In response to the lack of reliable physical parameters in the process simulation of butadiene extraction, a large amount of phase equilibrium data was collected in the context of the actual process of butadiene production by acetonitrile. The accuracy of five prediction methods, UNIFAC (UNIQUAC Functional-group Activity Coefficients), UNIFAC-LL, UNIFAC-LBY, UNIFAC-DMD and COSMO-RS, applied to the butadiene extraction process was verified using partial phase equilibrium data. The results showed that the UNIFAC-DMD method had the highest accuracy in predicting phase equilibrium data for the missing systems. COSMO-RS also predicted multiple systems with good accuracy, and a large number of missing phase equilibrium data were estimated using the UNIFAC-DMD and COSMO-RS methods. The predicted phase equilibrium data were checked for consistency. The NRTL-RK (Non-Random Two-Liquid-Redlich-Kwong equation of state) and UNIQUAC thermodynamic models were used to correlate the phase equilibrium data. Industrial device simulations were used to verify the accuracy of the thermodynamic model applied to the butadiene extraction process. The simulation results showed that the average deviations of the results obtained with the correlated thermodynamic model from the actual plant values were less than 2%, much smaller than the deviations of simulations using the commercial simulation software Aspen Plus and its database (>10%), indicating that the obtained phase equilibrium data are highly accurate and reliable. The best phase equilibrium data and thermodynamic model parameters for butadiene extraction are provided. This improves the accuracy and reliability of the design, optimization and control of the process, and provides a basis and guarantee for developing a more environmentally friendly and economical butadiene extraction process.
Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning and other big data services desirably need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with high-dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software defined network (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency which fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs degrade computing and storage job throughput, which can be alleviated by the use of erasure codes and data flow scheduling among worker nodes.
A novel method for noise removal for the rotating accelerometer gravity gradiometer (MAGG) is presented. It introduces a head-to-tail data expansion technique based on the zero-phase filtering principle. A scheme for determining band-pass filter parameters based on signal-to-noise ratio gain, smoothness index, and cross-correlation coefficient is designed using the Chebyshev optimal consistent approximation theory. Additionally, a wavelet denoising evaluation function is constructed, with the dmey wavelet basis function identified as the most effective for processing gravity gradient data. The results of hardware-in-the-loop simulations and prototype experiments show that, compared with other commonly used methods, the proposed processing method achieves a 14% improvement in the measurement variance of gravity gradient signals and a measurement accuracy within 4 E. This verifies that the proposed method effectively removes noise from the gradient signals, improves gravity gradiometry accuracy, and offers useful technical insights for high-precision airborne gravity gradiometry.
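The abstract above does not give implementation details, but the combination it describes, a head-to-tail (mirror) data expansion followed by zero-phase filtering, can be sketched as follows. This is a minimal illustration only: a moving-average FIR filter stands in for the paper's Chebyshev band-pass design, and the function name and parameters are assumptions.

```python
import numpy as np

def zero_phase_smooth(x, ntaps=5, npad=None):
    """Zero-phase low-pass smoothing via forward-backward filtering.

    A head-to-tail (mirror) extension is prepended and appended before
    filtering to suppress edge transients, then trimmed afterwards.
    The backward pass cancels the phase lag of the forward pass.
    """
    x = np.asarray(x, dtype=float)
    if npad is None:
        npad = 3 * ntaps
    npad = min(npad, len(x) - 1)
    # head-to-tail expansion: mirror the signal about its endpoints
    ext = np.concatenate([x[npad:0:-1], x, x[-2:-npad - 2:-1]])
    h = np.ones(ntaps) / ntaps  # simple FIR low-pass (moving average)
    fwd = np.convolve(ext, h, mode="same")               # forward pass
    bwd = np.convolve(fwd[::-1], h, mode="same")[::-1]   # backward pass
    return bwd[npad:npad + len(x)]
```

A constant signal passes through unchanged (no edge droop), which is exactly what the mirror extension is meant to guarantee.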
The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from failed decision-making feedback. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with long-term constraint violation sensitiveness is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and dynamically adjusts GAN learning rates and global aggregation weights based on energy consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons. Therefore, preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. We therefore propose machine learning methods as a substitute. Convolutional neural networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
Since 2008, a network of five sea-level monitoring stations has been progressively installed in French Polynesia. The stations are autonomous, and the data, collected at a sampling rate of 1 or 2 min, are not only recorded locally but also transferred in real time by a radio link to the NOAA through the GOES satellite. The new ET34-ANA-V80 version of ETERNA, initially developed for Earth tide analysis, is now able to analyze ocean tide records. Through a two-step validation scheme, we took advantage of the flexibility of this new version, operated in conjunction with the preprocessing facilities of the Tsoft software, to recover corrected data series able to model sea-level variations after elimination of the ocean tide signal. We performed the tidal analysis of the tide gauge data with the highest possible selectivity (optimal wave grouping) and a maximum of additional terms (shallow-water constituents). Our goal was to provide corrected data series and a modelled ocean tide signal to compute tide-free sea-level variations, as well as tidal prediction models with centimeter precision. We also present in this study the characteristics of the ocean tides in French Polynesia and preliminary results concerning the non-tidal variations of the sea level at the tide gauge settings.
Networks are fundamental to our modern world, and they appear throughout science and society. Access to massive amounts of data presents a unique opportunity to the research community. As networks grow in size, their complexity increases, and our ability to analyze them using the current state of the art is at severe risk of failing to keep pace. Therefore, this paper initiates a discussion on graph signal processing for large-scale data analysis. We first provide a comprehensive overview of the core ideas in graph signal processing (GSP) and their connection to conventional digital signal processing (DSP). We then summarize recent developments in basic GSP tools, including methods for graph filtering and graph learning, the graph signal, the graph Fourier transform (GFT), spectrum, graph frequency, etc. Graph filtering is a basic task that allows the contribution of individual frequencies to be isolated and therefore enables the removal of noise. We then consider a graph filter as a model that helps to extend the application of GSP methods to large datasets. To show its suitability and effectiveness, we first created a noisy graph signal and then applied the filter to it. After several rounds of simulation, we see that the filtered signal appears smoother and is closer to the original noise-free distance-based signal. Through this example application, we demonstrate that graph filtering is efficient for big data analytics.
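The denoising experiment described above (filter a noisy graph signal so it approaches the noise-free original) can be reproduced in miniature with a standard low-pass graph filter. The sketch below uses Tikhonov regularization on the combinatorial Laplacian, a textbook GSP denoiser; it is not the paper's specific filter, and the function name and the alpha parameter are assumptions.

```python
import numpy as np

def tikhonov_graph_filter(adj, signal, alpha=5.0):
    """Low-pass graph filter: x_hat = (I + alpha * L)^(-1) y.

    L = D - A is the combinatorial graph Laplacian; larger alpha
    enforces more smoothness of the output across graph edges.
    """
    adj = np.asarray(adj, dtype=float)
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial Laplacian
    n = lap.shape[0]
    return np.linalg.solve(np.eye(n) + alpha * lap,
                           np.asarray(signal, dtype=float))
```

On a path graph carrying a smooth ramp signal plus white noise, the filtered output sits measurably closer to the clean signal, mirroring the paper's qualitative finding.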
Current velocity observation by LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and is widely used in ocean observation. The shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles. The two methods have their respective advantages and shortcomings: the shear method calculates the current shear more accurately but is less accurate in the absolute value of the current, whereas the inverse method calculates the absolute value of the current velocity more accurately but resolves the current shear less accurately. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging," and proposes corresponding current calculation methods according to the different types of problems found in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. The comparison results show that the layering shear method can achieve the same effect as the inverse method in calculating the absolute value of the current velocity, while retaining the advantages of the shear method in calculating the current shear.
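The core idea behind any shear-based LADCP approach, integrating shear into a relative profile and then pinning it to an independent absolute reference, can be sketched as follows. This is an illustrative reconstruction, not the paper's layering algorithm; the function name, arguments, and the choice of a depth-mean reference are assumptions.

```python
import numpy as np

def shear_to_absolute_profile(shear, dz, depth_mean_velocity):
    """Integrate layer-averaged shear into an absolute velocity profile.

    Shear methods recover velocity *differences* between layers well but
    not the absolute level, so the integrated (relative) profile is
    shifted so its depth mean matches an independently measured
    depth-mean velocity (e.g., from bottom tracking or GPS navigation).
    """
    shear = np.asarray(shear, dtype=float)
    # cumulative integration of shear gives the relative (baroclinic) profile
    rel = np.concatenate([[0.0], np.cumsum(shear * dz)])
    # shift so the profile's depth mean equals the reference velocity
    return rel - rel.mean() + depth_mean_velocity
```

Given exact shear from a known profile and the profile's true depth mean, the reconstruction is exact, which is the self-consistency property any such integration scheme must satisfy.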
The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system is mainly composed of a plasma spark source (source level: 216 dB, main frequency: 750 Hz, frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer are constantly moving according to the towing configuration, accurately positioning the towed hydrophone array and performing the moveout correction of deep-towed multichannel seismic data before imaging are challenging. Initially, according to the characteristics of the system and the streamer shape in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected using polynomial curve fitting. Then, a new data-processing workflow for Kuiyang-ST2000 system data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and the data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea, and the results show that the profile has very high vertical and lateral resolution (0.5 m and 8 m, respectively), which can provide full and accurate details of gas hydrate-related and geohazard sedimentary and structural features in the South China Sea.
With the continued development of multiple Global Navigation Satellite Systems (GNSS) and the emergence of various frequencies, UnDifferenced and UnCombined (UDUC) data processing has become an increasingly attractive option. In this contribution, we provide an overview of the current status of UDUC GNSS data processing activities in China. These activities encompass the formulation of Precise Point Positioning (PPP) models and PPP-Real-Time Kinematic (PPP-RTK) models for processing single-station and multi-station GNSS data, respectively. Regarding single-station data processing, we discuss the advancements in PPP models, particularly the extension from a single system to multiple systems, and from dual frequencies to single and multiple frequencies. Additionally, we introduce the modified PPP model, which accounts for the time variation of receiver code biases, a departure from the conventional PPP model that typically assumes these biases to be time-constant. In the realm of multi-station PPP-RTK data processing, we introduce the ionosphere-weighted PPP-RTK model, which enhances the model strength by considering the spatial correlation of ionospheric delays. We also review the phase-only PPP-RTK model, designed to mitigate the impact of unmodelled code-related errors. Furthermore, we explore GLONASS PPP-RTK, achieved through the application of the integer-estimable model. For large-scale network data processing, we introduce the all-in-view PPP-RTK model, which alleviates the strict common-view requirement at all receivers. Moreover, we present the decentralized PPP-RTK data processing strategy, designed to improve computational efficiency. Overall, this work highlights the various advancements in UDUC GNSS data processing, providing insights into the state-of-the-art techniques employed in China to achieve precise GNSS applications.
Inter-agency government information sharing (IAGIS) plays an important role in improving the service and efficiency of government agencies. Currently, there is still no effective and secure way for data-driven IAGIS to fulfill the dynamic demands of information sharing between government agencies. Motivated by blockchain and data mining, a data-driven framework is proposed for IAGIS in this paper. Firstly, the blockchain is used as the core of the framework for monitoring and preventing leakage and abuse of government information, in order to guarantee information security. Secondly, a four-layer architecture is designed for implementing the proposed framework. Thirdly, the classical data mining algorithms PageRank and Apriori are applied to dynamically design smart contracts for information sharing, for the purpose of flexibly adjusting the information sharing strategies according to the practical demands of government agencies for public management and public service. Finally, a case study is presented to illustrate the operation of the proposed framework.
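For readers unfamiliar with one of the two mining algorithms the framework applies, here is a textbook power-iteration sketch of PageRank (not the paper's implementation; the damping value and function signature are the conventional defaults).

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    """PageRank by power iteration.

    adj[i, j] = 1 if node i links to node j. Builds a column-stochastic
    transition matrix (dangling nodes jump uniformly), then iterates
    r <- d*M*r + (1-d)/n until the L1 change falls below tol.
    """
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = damping * (M @ r) + (1 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next
```

In an IAGIS setting the nodes would be agencies or data items and the edges sharing relations, so the rank vector surfaces the most-referenced items when drafting sharing contracts.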
Timestamps play a key role in process mining because they determine the chronology of events and, subsequently, how those events are ordered in process modelling. The timestamp in process mining gives insight into process performance, conformance, and modelling. This means that problems with the timestamp will result in misrepresentations of the mined process. A few articles have been published on the quantification of data quality problems, but only one of them, at the time of this paper, addresses the quantification of timestamp quality problems. This article evaluates the quality of timestamps in an event log across two axes, using eleven quality dimensions and four levels of potential data quality problems. The eleven data quality dimensions were obtained through a thorough literature review of more than fifty process mining articles that focus on quality dimensions. This evaluation resulted in twelve data quality quantification metrics, which were applied to the MIMIC-II dataset as an illustration. The outcome of the timestamp quality quantification using the proposed typology enables the user to appreciate the quality of the event log, and thus makes it possible to evaluate the risk of carrying out specific data cleaning measures to improve the process mining outcome.
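The flavor of such quantification metrics can be illustrated with a small sketch computing three common timestamp problems (missing values, duplicates within a case, out-of-order events). These three metrics and the function shape are illustrative assumptions, not the article's twelve metrics.

```python
from datetime import datetime

def timestamp_quality(events):
    """Compute three illustrative timestamp quality fractions.

    events: list of (case_id, timestamp_or_None) in recorded order.
    Returns the fraction of events whose timestamp is missing,
    duplicated within its case, or earlier than its predecessor
    in the same case (out of order).
    """
    n = len(events)
    missing = sum(1 for _, ts in events if ts is None)
    seen, last = {}, {}
    dup = disorder = 0
    for case, ts in events:
        if ts is None:
            continue
        if ts in seen.setdefault(case, set()):
            dup += 1
        seen[case].add(ts)
        if case in last and ts < last[case]:
            disorder += 1
        last[case] = ts
    return {"missing": missing / n,
            "duplicate": dup / n,
            "out_of_order": disorder / n}
```

Each fraction maps directly to a cleaning decision: high "missing" suggests imputation risk, high "out_of_order" suggests re-sorting may distort the mined model.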
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and magnetic resonance imaging scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images place a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
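Bit-plane decomposition, the first building block the scheme names, splits an 8-bit image into eight binary images that can then be permuted or diffused independently. The sketch below shows only this decomposition step and its exact inverse (the chaos-based encryption itself is not reproduced; function names are assumptions).

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit image into its 8 binary bit planes (LSB first)."""
    img = np.asarray(img, dtype=np.uint8)
    return [(img >> b) & 1 for b in range(8)]

def recombine(planes):
    """Reassemble bit planes (LSB first) into the original 8-bit image."""
    return sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
```

Lossless invertibility is the property that matters for medical images: the decrypted image must match the original bit for bit.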
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the classifier's learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from the text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
The Yutu-2 rover onboard the Chang'E-4 mission performed the first lunar penetrating radar detection on the farside of the Moon. The high-frequency channel presented us with many unprecedented details of the subsurface structures within a depth of approximately 50 m. However, it was still difficult to identify finer layers from the cluttered reflections and scattering waves. We applied deconvolution to improve the vertical resolution of the radar profile by extending the limited bandwidth associated with the emitted radar pulse. To overcome the challenges arising from mixed-phase wavelets and the problematic amplification of noise, we performed predictive deconvolution to remove the minimum-phase components from the Chang'E-4 dataset, followed by a comprehensive phase rotation to rectify phase anomalies in the radar image. Subsequently, we implemented irreversible migration filtering to mitigate the noise and diminutive clutter echoes amplified by deconvolution. The processed data show an evident enhancement of the vertical resolution, with a widened bandwidth in the frequency domain and better signal clarity in the time domain, providing more undisputed details of the subsurface structures near the Chang'E-4 landing site.
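Predictive deconvolution, the central step above, designs a Wiener prediction filter from the trace's own autocorrelation and subtracts the predictable (wavelet-related) part, leaving a whitened prediction error. The sketch below is a generic single-trace version with prewhitening, not the Chang'E-4 processing chain; the parameter names and defaults are assumptions.

```python
import numpy as np

def predictive_deconvolution(trace, filt_len=10, pred_dist=1, eps=1e-3):
    """Wiener prediction-error filtering of a single trace.

    Solves the normal equations R a = g built from the trace's
    autocorrelation (lags pred_dist .. pred_dist+filt_len-1), then
    subtracts the predicted part. eps adds prewhitening (a small
    fraction of zero-lag energy on the diagonal) for stability.
    """
    x = np.asarray(trace, dtype=float)
    n = len(x)
    full = np.correlate(x, x, mode="full")
    ac = full[n - 1:n - 1 + filt_len + pred_dist]   # lags 0..L+d-1
    R = np.array([[ac[abs(i - j)] for j in range(filt_len)]
                  for i in range(filt_len)])
    R += eps * ac[0] * np.eye(filt_len)
    g = ac[pred_dist:pred_dist + filt_len]
    a = np.linalg.solve(R, g)
    pred = np.zeros(n)
    for k, ak in enumerate(a):                       # apply prediction filter
        lag = pred_dist + k
        pred[lag:] += ak * x[:n - lag]
    return x - pred                                  # prediction error
```

On a strongly correlated (AR-type) input the prediction-error output has much lower variance, which is the whitening effect that widens the usable bandwidth.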
In order to obtain high-precision GPS control point results and provide high-precision known points for various projects, this study uses a variety of mature GPS post-processing software packages to process the observation data of the GPS control network of Guanyinge Reservoir, and compares the results obtained by the different packages. Based on the test results, the reasons for the accuracy differences between the software packages are analyzed, and the optimal results are identified through analysis and comparison. The purpose of this paper is to provide a useful reference for GPS software users in processing data.
This paper addresses the estimation problem of an unknown drift parameter matrix for a fractional Ornstein-Uhlenbeck process in a multi-dimensional setting. To tackle this problem, we propose a novel approach based on rough path theory that allows us to construct pathwise rough path estimators from both continuous and discrete observations of a single path. Our approach is particularly suitable for high-frequency data. To formulate the parameter estimators, we introduce a theory of pathwise Itô integrals with respect to fractional Brownian motion. By establishing the regularity of fractional Ornstein-Uhlenbeck processes and analyzing the long-term behavior of the associated Lévy area processes, we demonstrate that our estimators are strongly consistent and pathwise stable. Our findings offer a new perspective on estimating the drift parameter matrix for fractional Ornstein-Uhlenbeck processes in multi-dimensional settings, and may have practical implications for fields including finance, economics, and engineering.
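To fix intuition for what "drift estimation from a single path" means, here is the much simpler classical case: a one-dimensional Ornstein-Uhlenbeck process driven by ordinary Brownian motion (H = 1/2), where the drift can be read off the lag-1 autoregression of the sampled path. This is only a baseline illustration; the paper's rough-path estimators for the fractional, matrix-valued case are substantially more involved.

```python
import numpy as np

def simulate_ou(theta, sigma, dt, n, rng):
    """Euler-Maruyama simulation of dX = -theta * X dt + sigma dW."""
    x = np.zeros(n)
    noise = rng.standard_normal(n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * noise[i]
    return x

def estimate_drift(x, dt):
    """Classical drift estimate: theta_hat = -log(phi_hat) / dt,

    where phi_hat is the least-squares lag-1 autoregression coefficient
    of the sampled path (valid for the standard, H = 1/2 OU process).
    """
    phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
    return -np.log(phi) / dt
```

With a long high-frequency sample the estimate concentrates near the true drift, illustrating the kind of consistency result the paper proves in its far more general setting.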
Robotic Process Automation (RPA) has emerged as a transformative technology that has revolutionized local business processes by automating repetitive tasks and efficiently adjusting operations. This research focuses on the development of the RPA environment and its future features, in order to elaborate projected policies based on comprehensive experience with it. Industries looking for IT solutions to fully scale their companies, improve business flexibility and customer satisfaction, increase productivity and accuracy, and reduce costs with quick scalability have found RPA to be an advanced technology with exceptional performance. This study emphasizes future trends and foresees the evolution of RPA through the integration of artificial intelligence, machine learning, and cognitive automation into RPA frameworks. Moreover, it analyzes technical constraints, including scalability, security issues, and interoperability, while investigating the regulatory and ethical considerations that are essential to the ethical utilization of RPA. By providing a comprehensive analysis of RPA and its emerging trends, this study aims to provide valuable insights into its benefits for industrial performance, so as to guide strategic decisions and future implementations of RPA.
This article discusses the current status and development strategies of computer science and technology in the context of big data. Firstly, it explains the relationship between big data and computer science and technology, focusing on the current application status of computer science and technology in big data, including data storage, data processing, and data analysis. It then proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
The processing of measured data plays an important role in reverse engineering. Based on grey system theory, we first propose some methods for processing measured data in reverse engineering. The measured data usually contain some abnormalities. When the abnormal data are eliminated by filtering, blanks are created. Grey generation and the GM(1,1) model are used to create new data for these blanks. For the uneven data sequence created by measuring error, the mean generation is used to smooth it, and then the stepwise and smooth generations are used to improve the data sequence.
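The GM(1,1) model mentioned above is the standard first-order, one-variable grey model: an accumulated generating operation (AGO) turns the raw series into a near-exponential one, the parameters a and b are fit by least squares, and the inverse AGO recovers fitted or forecast values. The sketch below is a generic GM(1,1), not the authors' specific gap-filling procedure; the function signature is an assumption.

```python
import numpy as np

def gm11(x):
    """Fit a GM(1,1) grey model to a positive series and return fitted values.

    Steps: AGO (cumulative sum), mean generation of adjacent AGO values,
    least-squares fit of x(k) + a*z(k) = b, then inverse AGO of the
    closed-form solution of the whitened equation.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ago = np.cumsum(x)                       # accumulated generating operation
    z = 0.5 * (ago[1:] + ago[:-1])           # mean generation of adjacent values
    B = np.column_stack([-z, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(n)
    ago_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x[0]], np.diff(ago_hat)])   # inverse AGO
```

On a near-exponential series (the kind of trend grey models target), the fitted values track the data closely, which is why GM(1,1) is usable for filling blanks left by filtered-out abnormal points.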
Funding: supported by the National Natural Science Foundation of China (22178190).
Funding: supported by the National Natural Science Foundation of China (No. 62271165, 62027802, 62201307), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515030297), the Shenzhen Science and Technology Program (ZDSYS20210623091808025), the Stable Support Plan Program (GXWD20231129102638002), and the Major Key Project of PCL (No. PCL2024A01).
Abstract: Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning and other big data services urgently need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with highly dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software defined networking (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulation. The results indicate that LMCNs have dynamic end-to-end latency that fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs reduce computing and storage job throughput, which can be alleviated by the use of erasure codes and data flow scheduling across worker nodes.
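The periodic end-to-end latency reported above is dominated by propagation delay over long inter-satellite links. As a back-of-the-envelope sketch (not STK output, and with made-up node positions), one-way propagation latency along a multi-hop path can be computed directly from the hop geometry:

```python
import math

C = 299_792.458  # speed of light in vacuum, km/s

def propagation_latency_ms(path):
    """One-way propagation latency (ms) along a list of 3-D positions in km."""
    total_km = 0.0
    for p, q in zip(path, path[1:]):
        total_km += math.dist(p, q)  # Euclidean hop length
    return total_km / C * 1000.0

# Illustrative bent-pipe route: ground -> LEO satellite at 550 km altitude
# -> one inter-satellite hop -> ground. Coordinates are hypothetical.
path = [(0, 0, 0), (0, 0, 550), (2000, 0, 550), (2000, 0, 0)]
latency = propagation_latency_ms(path)
```

As satellites move, the hop lengths (and hence this latency) change continuously, which is the fluctuation the simulation platform measures at scale.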
Abstract: A novel method for noise removal for the rotating accelerometer gravity gradiometer (MAGG) is presented. It introduces a head-to-tail data expansion technique based on the zero-phase filtering principle. A scheme for determining band-pass filter parameters based on signal-to-noise ratio gain, a smoothness index and the cross-correlation coefficient is designed using the Chebyshev optimal consistent approximation theory. Additionally, a wavelet denoising evaluation function is constructed, with the dmey wavelet basis function identified as the most effective for processing gravity gradient data. The results of hardware-in-the-loop simulations and prototype experiments show that the proposed processing method achieves a 14% improvement in the measurement variance of gravity gradient signals compared with other commonly used methods, and the measurement accuracy reaches within 4 E. This verifies that the proposed method effectively removes noise from the gradient signals, improves gravity gradiometry accuracy, and offers technical insight for high-precision airborne gravity gradiometry.
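The head-to-tail expansion and zero-phase idea can be sketched in a few lines. The code below is a toy stand-in: it mirrors the signal ends before filtering (the "expansion") and runs a simple moving average forward then backward so the net phase delay cancels; the paper's actual band-pass design via Chebyshev approximation is not reproduced here.

```python
def zero_phase_smooth(x, width=5):
    """Forward-backward moving average with head-to-tail (mirror) expansion.

    Mirroring the ends before filtering suppresses edge transients, and
    applying the filter forward then backward cancels the phase delay,
    in the spirit of zero-phase filtering.
    """
    pad = width  # expansion length at each end
    ext = x[pad:0:-1] + x + x[-2:-pad - 2:-1]  # head-to-tail mirror expansion

    def moving_average(s):
        out, half = [], width // 2
        for i in range(len(s)):
            lo, hi = max(0, i - half), min(len(s), i + half + 1)
            out.append(sum(s[lo:hi]) / (hi - lo))
        return out

    fwd = moving_average(ext)
    bwd = moving_average(fwd[::-1])[::-1]  # second pass reversed: zero phase
    return bwd[pad:pad + len(x)]           # trim the expansion back off
```

In practice the moving average would be replaced by the designed band-pass filter; the expansion/trim and forward-backward structure stay the same.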
Funding: Supported by the China Southern Power Grid Technology Project under Grant 03600KK52220019 (GDKJXM20220253).
Abstract: The convergence of the Internet of Things (IoT), 5G and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from failed decision-making feedback. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with long-term constraint violation sensitiveness is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and dynamically adjusts GAN learning rates and global aggregation weights based on energy consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
Funding: Funded by the National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082), the CAS "Light of West China" Program, the Yunnan Revitalization Talent Support Program of Yunnan Province, and the National Key R&D Program of China, Gravitational Wave Detection Project No. 2022YFC2203800.
Abstract: Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be grouped into two main types: first, the brightness of the object significantly increases due to the passage of a nearby star, referred to as "stellar contamination"; and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. We therefore propose using machine learning methods instead. Convolutional neural networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
Funding: Funded by the "Talent Introduction Scientific Research Start-Up Fund" of Shandong University of Science and Technology (Grant No. 0104060510217) and the Open Fund of the State Key Laboratory of Geodesy and Earth's Dynamics (Grant No. SKLGED2021-3-5).
Abstract: Since 2008, a network of five sea-level monitoring stations has been progressively installed in French Polynesia. The stations are autonomous, and the data, collected at a sampling rate of 1 or 2 min, are not only recorded locally but also transferred in real time by radio link to the NOAA through the GOES satellite. The new ET34-ANA-V80 version of ETERNA, initially developed for Earth tides analysis, is now able to analyze ocean tides records. Through a two-step validation scheme, we took advantage of the flexibility of this new version, operated in conjunction with the preprocessing facilities of the Tsoft software, to recover corrected data series able to model sea-level variations after elimination of the ocean tides signal. We performed the tidal analysis of the tide gauge data with the highest possible selectivity (optimal wave grouping) and a maximum of additional terms (shallow-water constituents). Our goal was to provide corrected data series and a modelled ocean tides signal to compute tide-free sea-level variations, as well as tidal prediction models with centimeter precision. We also present the characteristics of the ocean tides in French Polynesia and preliminary results concerning the non-tidal variations of the sea level at the tide gauge sites.
Funding: Supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2019R1A2C1006159 and NRF-2021R1A6A1A03039493), and by the 2021 Yeungnam University Research Grant.
Abstract: Networks are fundamental to our modern world, appearing throughout science and society. Access to massive amounts of network data presents a unique opportunity to the research community. As networks grow in size their complexity increases, and our ability to analyze them using the current state of the art is at severe risk of failing to keep pace. This paper therefore initiates a discussion on graph signal processing for large-scale data analysis. We first provide a comprehensive overview of the core ideas in graph signal processing (GSP) and their connection to conventional digital signal processing (DSP). We then summarize recent developments in basic GSP tools, including methods for graph filtering and graph learning, graph signals, the graph Fourier transform (GFT), spectra, and graph frequencies. Graph filtering is a basic task that isolates the contribution of individual frequencies and therefore enables the removal of noise. We then consider a graph filter as a model that helps extend the application of GSP methods to large datasets. To show its suitability and effectiveness, we created a noisy graph signal and applied the filter to it. After several rounds of simulation, the filtered signal appears smoother and closer to the original noise-free distance-based signal. Through this example application, we demonstrate that graph filtering is efficient for big data analytics.
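The noisy-signal experiment described above can be sketched with a minimal low-pass graph filter. This is an illustrative diffusion-style filter over an adjacency list, not the paper's specific filter design: each node's value is repeatedly mixed with the mean of its neighbours, which attenuates high graph frequencies (sharp node-to-node variation).

```python
def graph_lowpass(adj, signal, alpha=0.5, iterations=3):
    """Simple low-pass graph filter: repeatedly mix each node's value with
    the mean of its neighbours. adj is an adjacency list {node: [nbrs]}."""
    x = dict(signal)
    for _ in range(iterations):
        nxt = {}
        for v, nbrs in adj.items():
            nbr_mean = sum(x[u] for u in nbrs) / len(nbrs) if nbrs else x[v]
            nxt[v] = (1 - alpha) * x[v] + alpha * nbr_mean
        x = nxt
    return x

# Path graph 0-1-2-3-4 carrying a noisy spike at node 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
noisy = {0: 0.0, 1: 0.0, 2: 5.0, 3: 0.0, 4: 0.0}
smoothed = graph_lowpass(adj, noisy)
```

After filtering, the spike at node 2 is spread across its neighbourhood, i.e. the signal is smoother, mirroring the behaviour reported in the abstract.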
Funding: Supported by the National Natural Science Foundation of China under contract No. 42206033; the Marine Geological Survey Program of China Geological Survey under contract No. DD20221706; the Research Foundation of the National Engineering Research Center for Gas Hydrate Exploration and Development, Innovation Team Project, under contract No. 2022GMGSCXYF41003; and the Scientific Research Fund of the Second Institute of Oceanography, Ministry of Natural Resources, under contract No. JG2006.
Abstract: Current velocity observation with the LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and it is widely used in ocean observation. The shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles, and each has its advantages and shortcomings. The shear method calculates the current shear more accurately, but its accuracy in the absolute value of the current is lower. The inverse method calculates the absolute value of the current velocity more accurately, but its current shear is less accurate. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging", and proposes corresponding current calculation methods for the different types of problems found in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. The comparison results show that the layering shear method achieves the same accuracy as the inverse method in the absolute value of the current velocity, while retaining the advantages of the shear method in the calculation of the current shear.
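The "layering averaging" idea can be illustrated with a stripped-down sketch. The function below is a hypothetical simplification of the shear-method pipeline, not the paper's algorithm: it computes point-wise vertical shear, averages the shear within fixed-thickness depth layers, and integrates the layer-mean shear downward to rebuild a relative velocity profile (the absolute reference that the real method supplies is omitted).

```python
def layered_shear_profile(depths, velocities, layer_thickness=10.0):
    """Layering-averaging sketch: point-wise shear -> layer-mean shear ->
    integrated relative velocity profile. depths in m, velocities in m/s."""
    # point-wise vertical shear dU/dz between consecutive samples
    shears = []
    for i in range(len(depths) - 1):
        dz = depths[i + 1] - depths[i]
        mid = (depths[i] + depths[i + 1]) / 2
        shears.append((mid, (velocities[i + 1] - velocities[i]) / dz))
    # average the shear samples falling in each depth layer
    layers = {}
    for z, s in shears:
        layers.setdefault(int(z // layer_thickness), []).append(s)
    mean_shear = {k: sum(v) / len(v) for k, v in layers.items()}
    # integrate layer-mean shear from the surface (relative profile)
    profile, u = [], 0.0
    for k in sorted(mean_shear):
        u += mean_shear[k] * layer_thickness
        profile.append(((k + 1) * layer_thickness, u))
    return profile
```

Averaging the shear within layers before integrating suppresses sample-to-sample noise while preserving the shear structure, which is the stated advantage of the method.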
Funding: Supported by the National Key R&D Program of China (No. 2016YFC0303900), the Laoshan Laboratory (Nos. MGQNLM-KF201807 and LSKJ202203604), and the National Natural Science Foundation of China (No. 42106072).
Abstract: The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system is mainly composed of a plasma spark source (source level: 216 dB, main frequency: 750 Hz, frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer are constantly moving according to the towing configuration, accurately positioning the hydrophone array and applying moveout correction to deep-towed multichannel seismic data before imaging are challenging. First, according to the characteristics of the system and the shape of the towed streamer in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected using polynomial curve fitting. Then, a new data-processing workflow for Kuiyang-ST2000 data was introduced, mainly including floating datum setting, residual static correction and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and the data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea; the results show that the profile has very high vertical and lateral resolutions (0.5 m and 8 m, respectively), which can provide full and accurate details of gas hydrate-related and geohazard sedimentary and structural features in the South China Sea.
Funding: National Natural Science Foundation of China (No. 42022025).
Abstract: With the continued development of multiple Global Navigation Satellite Systems (GNSS) and the emergence of various frequencies, UnDifferenced and UnCombined (UDUC) data processing has become an increasingly attractive option. In this contribution, we provide an overview of the current status of UDUC GNSS data processing activities in China. These activities encompass the formulation of Precise Point Positioning (PPP) models and PPP Real-Time Kinematic (PPP-RTK) models for processing single-station and multi-station GNSS data, respectively. Regarding single-station data processing, we discuss the advancements in PPP models, particularly the extension from a single system to multiple systems, and from dual frequencies to single and multiple frequencies. Additionally, we introduce the modified PPP model, which accounts for the time variation of receiver code biases, a departure from the conventional PPP model that typically assumes these biases to be time-constant. In the realm of multi-station PPP-RTK data processing, we introduce the ionosphere-weighted PPP-RTK model, which enhances the model strength by considering the spatial correlation of ionospheric delays. We also review the phase-only PPP-RTK model, designed to mitigate the impact of unmodelled code-related errors. Furthermore, we explore GLONASS PPP-RTK, achieved through the application of the integer-estimable model. For large-scale network data processing, we introduce the all-in-view PPP-RTK model, which alleviates the strict common-view requirement at all receivers. Moreover, we present the decentralized PPP-RTK data processing strategy, designed to improve computational efficiency. Overall, this work highlights the various advancements in UDUC GNSS data processing, providing insights into the state-of-the-art techniques employed in China to achieve precise GNSS applications.
Funding: Supported by the Project of the Guangdong Science and Technology Department (2020B010166005), the Post-Doctoral Research Project (Z000158), the Ministry of Education Social Science Fund (22YJ630167), the fund project of the Department of Science and Technology of Guangdong Province (GDKTP2021032500), and the Guangdong Philosophy and Social Science Fund (GD22YYJ15).
Abstract: Inter-agency government information sharing (IAGIS) plays an important role in improving the service and efficiency of government agencies. Currently, there is still no effective and secure way for data-driven IAGIS to fulfill the dynamic demands of information sharing between government agencies. Motivated by blockchain and data mining, a data-driven framework is proposed for IAGIS in this paper. Firstly, blockchain is used as the core of the framework for monitoring and preventing leakage and abuse of government information, in order to guarantee information security. Secondly, a four-layer architecture is designed for implementing the proposed framework. Thirdly, the classical data mining algorithms PageRank and Apriori are applied to dynamically design smart contracts for information sharing, for the purpose of flexibly adjusting the information sharing strategies according to the practical demands of government agencies for public management and public service. Finally, a case study is presented to illustrate the operation of the proposed framework.
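Of the two mining algorithms named above, PageRank is compact enough to sketch. The following is a plain power-iteration implementation over an adjacency list; the agency names and link structure are hypothetical, standing in for "which agencies request information from which".

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank over {node: [outgoing links]}.
    Dangling nodes redistribute their rank evenly."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iterations):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    nxt[u] += share
            else:  # dangling node: spread its rank evenly
                for u in nodes:
                    nxt[u] += damping * rank[v] / n
        rank = nxt
    return rank

# Hypothetical agencies: an agency referenced by many peers ranks higher,
# which could steer how generous its sharing contract should be.
links = {"A": ["B"], "B": ["C"], "C": ["A", "B"], "D": ["C"]}
ranks = pagerank(links)
```

A high-rank agency is one that many others depend on, so a smart contract could, for instance, prioritise or widen its sharing permissions accordingly.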
Abstract: Timestamps play a key role in process mining because they determine the chronology of events and, consequently, how events are ordered in process modelling. The timestamp in process mining gives insight into process performance, conformance, and modelling; problems with the timestamp therefore result in misrepresentations of the mined process. A few articles have been published on the quantification of data quality problems, but at the time of writing only one of them addresses the quantification of timestamp quality problems. This article evaluates the quality of timestamps in event logs across two axes, using eleven quality dimensions and four levels of potential data quality problems. The eleven data quality dimensions were obtained through a thorough literature review of more than fifty process mining articles that focus on quality dimensions. This evaluation resulted in twelve data quality quantification metrics, which were applied to the MIMIC-II dataset as an illustration. The outcome of the timestamp quality quantification using the proposed typology enables users to appreciate the quality of the event log, making it possible to evaluate the risk of carrying out specific data cleaning measures to improve the process mining outcome.
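Two of the simplest timestamp quality metrics one could define in this spirit are sketched below. These are illustrative metrics, not the article's twelve: the fraction of events with a missing timestamp, and the fraction of within-case successor pairs whose timestamps run backwards. The event-log field names (`case`, `ts`) are assumptions.

```python
from datetime import datetime

def timestamp_quality(events):
    """Return (missing_rate, disorder_rate) for a flat event log, where each
    event is {'case': case_id, 'ts': datetime or None} in recorded order."""
    missing = sum(1 for e in events if e.get("ts") is None)
    missing_rate = missing / len(events)

    by_case = {}
    for e in events:
        by_case.setdefault(e["case"], []).append(e.get("ts"))
    disorder = pairs = 0
    for ts_list in by_case.values():
        for a, b in zip(ts_list, ts_list[1:]):
            if a is not None and b is not None:
                pairs += 1
                if b < a:  # successor recorded earlier than predecessor
                    disorder += 1
    disorder_rate = disorder / pairs if pairs else 0.0
    return missing_rate, disorder_rate

log = [
    {"case": 1, "ts": datetime(2024, 1, 1, 9, 0)},
    {"case": 1, "ts": datetime(2024, 1, 1, 8, 0)},  # goes backwards
    {"case": 2, "ts": None},                         # missing timestamp
    {"case": 2, "ts": datetime(2024, 1, 1, 10, 0)},
]
missing_rate, disorder_rate = timestamp_quality(log)
```

Scoring a log this way before mining makes the risk of a cleaning step concrete: a high disorder rate, for example, warns that naive reordering would rewrite much of the control flow.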
Abstract: The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images place a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address the security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
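The chaos-theory ingredient can be illustrated with a toy cipher. This is a minimal stand-in for the paper's scheme, not its actual algorithm: a logistic-map keystream (chaotic for `r` near 4, with the map's initial condition and parameter acting as the key) is XORed with the data bytes, and the bit-plane decomposition step is omitted.

```python
def logistic_keystream(n, x0=0.6, r=3.99):
    """Byte keystream from the logistic map x <- r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return out

def xor_cipher(data, key):
    """XOR data with a chaotic keystream; the same call decrypts."""
    stream = logistic_keystream(len(data), *key)
    return bytes(b ^ k for b, k in zip(data, stream))

plain = bytes(range(32))        # stand-in for a block of image bytes
key = (0.3141592653, 3.9999)    # (x0, r) plays the role of the secret key
cipher = xor_cipher(plain, key)
recovered = xor_cipher(cipher, key)
```

Because the logistic map is sensitive to its initial condition, even a tiny change in `x0` produces a diverging keystream after a few iterations, which is the source of the key sensitivity claimed for such schemes (a real design would also diffuse across bit planes and pixels).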
Abstract: One of the biggest dangers to society today is terrorism, whose attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42325406 and 42304187), the China Postdoctoral Science Foundation (Grant No. 2023M733476), the CAS Project for Young Scientists in Basic Research (Grant No. YSBR082), the National Key R&D Program of China (Grant No. 2022YFF0503203), and the Key Research Program of the Institute of Geology and Geophysics, Chinese Academy of Sciences (Grant Nos. IGGCAS-202101 and IGGCAS-202401).
Abstract: The Yutu-2 rover onboard the Chang'E-4 mission performed the first lunar penetrating radar detection on the farside of the Moon. The high-frequency channel presented us with many unprecedented details of the subsurface structures within a depth of approximately 50 m. However, it was still difficult to identify finer layers among the cluttered reflections and scattered waves. We applied deconvolution to improve the vertical resolution of the radar profile by extending the limited bandwidth associated with the emitted radar pulse. To overcome the challenges arising from mixed-phase wavelets and the problematic amplification of noise, we performed predictive deconvolution to remove the minimum-phase components from the Chang'E-4 dataset, followed by a comprehensive phase rotation to rectify phase anomalies in the radar image. Subsequently, we implemented irreversible migration filtering to mitigate the noise and diminutive clutter echoes amplified by deconvolution. The processed data show an evident enhancement of the vertical resolution, with a widened bandwidth in the frequency domain and better signal clarity in the time domain, providing more undisputed details of the subsurface structures near the Chang'E-4 landing site.
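The core of predictive deconvolution can be shown at a toy scale. The sketch below is a deliberately simplified order-2, unit-gap prediction-error filter (real processing uses much longer operators and a chosen prediction gap): it fits a two-tap predictor from the trace autocorrelation via the normal equations, solved in closed form, and outputs the prediction error, which whitens the spectrum and compresses the predictable part of the wavelet.

```python
def autocorr(x, lag):
    """Unnormalised autocorrelation of a sequence at the given lag."""
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

def prediction_error_filter(trace, eps=1e-3):
    """Order-2 predictive deconvolution sketch: predict x[n] from the two
    previous samples and return the prediction error (the 'deconvolved'
    trace). eps pre-whitens the zero-lag autocorrelation for stability."""
    r0 = autocorr(trace, 0) * (1.0 + eps)
    r1 = autocorr(trace, 1)
    r2 = autocorr(trace, 2)
    # Normal equations [[r0, r1], [r1, r0]] @ [a1, a2] = [r1, r2],
    # solved by Cramer's rule.
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r2 * r1) / det
    a2 = (r0 * r2 - r1 * r1) / det
    out = list(trace[:2])  # first two samples have no full prediction
    for n in range(2, len(trace)):
        out.append(trace[n] - a1 * trace[n - 1] - a2 * trace[n - 2])
    return out
```

On a strongly predictable trace (for example, a slowly decaying reverberation), the prediction error carries far less energy than the input, which is exactly the ringing-removal effect exploited before imaging.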
Abstract: In order to obtain high-precision GPS control point results and provide high-precision known points for various projects, this study uses a variety of mature GPS post-processing software packages to process the observation data of the GPS control network of Guanyinge Reservoir, and compares the results obtained by the different packages. Based on the test results, the reasons for the accuracy differences between the packages are analyzed, and the optimal results are identified through this comparison. The purpose of this paper is to provide a useful reference for GPS software users when processing data.
Funding: Supported by the Shanghai Artificial Intelligence Laboratory.
Abstract: This paper addresses the estimation problem of an unknown drift parameter matrix for a fractional Ornstein-Uhlenbeck process in a multi-dimensional setting. To tackle this problem, we propose a novel approach based on rough path theory that allows us to construct pathwise rough path estimators from both continuous and discrete observations of a single path. Our approach is particularly suitable for high-frequency data. To formulate the parameter estimators, we introduce a theory of pathwise Itô integrals with respect to fractional Brownian motion. By establishing the regularity of fractional Ornstein-Uhlenbeck processes and analyzing the long-term behavior of the associated Lévy area processes, we demonstrate that our estimators are strongly consistent and pathwise stable. Our findings offer a new perspective on estimating the drift parameter matrix for fractional Ornstein-Uhlenbeck processes in multi-dimensional settings, and may have practical implications for fields including finance, economics, and engineering.
Abstract: Robotic Process Automation (RPA) has emerged as a transformative technology that has reshaped business processes by automating repetitive tasks and streamlining operations. This research focuses on the current RPA landscape and its future direction, in order to elaborate projected policies based on comprehensive experience with the technology. Industry is looking to IT solutions to scale fully: improving business flexibility, customer satisfaction, productivity and accuracy while reducing costs, and the quick scalability of RPA has made it an advanced technology with exceptional performance. The study emphasizes future trends and foresees the evolution of RPA through the integration of artificial intelligence, machine learning and cognitive automation into RPA frameworks. Moreover, it analyzes the technical constraints, including scalability, security issues and interoperability, while investigating the regulatory and ethical considerations that are essential to the responsible use of RPA. By providing a comprehensive analysis of RPA and its future trends, this study aims to offer valuable insights into its benefits for industrial performance, address the gaps observed, and guide strategic decisions and future implementations of RPA.
Abstract: This article discusses the current status and development strategies of computer science and technology in the context of big data. Firstly, it explains the relationship between big data and computer science and technology, focusing on the current applications of computer science and technology in big data, including data storage, data processing, and data analysis. It then proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
Abstract: The processing of measuring data plays an important role in reverse engineering. Based on grey system theory, we propose several methods for processing measuring data in reverse engineering. The measured data usually contain some abnormalities. When the abnormal data are eliminated by filtering, blanks are created in the sequence. Grey generation and the GM(1,1) model are used to create new data for these blanks. For the uneven data sequence created by measuring error, mean generation is used to smooth it, and then stepwise and smooth generations are used to improve the data sequence.
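The GM(1,1) model mentioned above is compact enough to sketch in full. The implementation below follows the standard textbook construction (accumulated generating operation, background values, least-squares fit of the grey differential equation dx1/dt + a*x1 = b); the sample series used to exercise it is made up.

```python
import math

def gm11(x0):
    """Fit a GM(1,1) grey model to a positive series x0 and return a
    predictor for the original series (k = 0 gives x0[0])."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)  # accumulated generating operation (AGO)
    # background values and least squares on y = slope*z + c
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    slope = (m * szy - sz * sy) / (m * szz - sz * sz)
    a, b = -slope, (sy - slope * sz) / m  # grey development & input coefficients

    def predict(k):
        """Predicted value of the original series at step k."""
        if k == 0:
            return x0[0]
        x1k = (x0[0] - b / a) * math.exp(-a * k) + b / a
        x1p = (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
        return x1k - x1p  # inverse AGO: difference of accumulated values

    return predict
```

For filling blanks left by filtered-out abnormal points, the model is fitted on the samples before the gap and `predict(k)` supplies the missing values; on near-exponential data the fit is very close.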