In light of the rapid growth and development of social media, it has become a focus of interest in many scientific fields, which seek to extract useful information, or knowledge, from it: for example, information about people's behaviors and interactions that can be used to analyze sentiment or understand the behavior of users and groups. This extracted knowledge plays an important role in decision-making, in creating and improving marketing objectives and competitive advantage, in monitoring political and economic events, and in development across many fields. To extract this knowledge, the vast amount of data found within social media must be analyzed using the most popular data mining techniques and applications related to social media sites.
In this article, the relationship between knowledge of competitors and the development of new products in the field of capital medical equipment is investigated. To identify criteria for measuring competitors' knowledge and for developing new capital medical equipment products, marketing experts were interviewed, and a researcher-made questionnaire was then compiled and distributed among the statistical sample of the research; in total, 100 members of the statistical community were selected and their questionnaires collected. To analyze the gathered data, the structural equation modeling (SEM) method in the SmartPLS 2 software was used to estimate the model, and the k-means approach was then used to cluster the capital medical equipment market based on the knowledge of actual and potential competitors. The results show that knowledge of potential and actual competitors has a positive and significant effect on the development of new products in the capital medical equipment market. In terms of knowledge of actual competitors, the "MRI", "Ultrasound" and "SPECT" markets fall in the low knowledge cluster; the "PET MRI", "CT Scan", "Mammography", "Radiography, Fluoroscopy and CRM", "PET CT", "SPECT CT" and "Gamma Camera" markets fall in the medium knowledge cluster; and the "Angiography" and "CBCT" markets fall in the high knowledge cluster.
In terms of knowledge of potential competitors, the "Angiography", "Mammography", "SPECT" and "SPECT CT" markets fall in the low knowledge cluster; the "CT Scan", "Radiography, Fluoroscopy and CRM", "PET CT" and "CBCT" markets fall in the medium knowledge cluster; and the "MRI", "PET MRI", "Ultrasound" and "Gamma Camera" markets fall in the high knowledge cluster.
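The clustering step described above can be sketched with a minimal one-dimensional k-means (Lloyd's algorithm). The knowledge scores below are illustrative placeholders, not the questionnaire-derived values from the study:

```python
# Minimal 1-D k-means (Lloyd's algorithm) sketch: cluster markets into
# low / medium / high competitor-knowledge groups. The scores are
# illustrative, not the study's measured values.
def kmeans_1d(points, k=3, iters=100):
    # Initialize centroids spread evenly across the value range.
    lo, hi = min(points), max(points)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    assignment = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        assignment = [min(range(k), key=lambda c: abs(p - centroids[c]))
                      for p in points]
        # Update step: move each centroid to the mean of its cluster.
        for c in range(k):
            members = [p for p, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assignment

# Hypothetical knowledge scores per market (0-10 scale).
markets = {"MRI": 2.1, "Ultrasound": 2.4, "SPECT": 2.0,
           "CT Scan": 5.2, "Mammography": 5.5, "Gamma Camera": 5.0,
           "Angiography": 8.7, "CBCT": 8.9}
cents, assign = kmeans_1d(list(markets.values()))
order = sorted(range(3), key=lambda c: cents[c])      # low -> high
labels = {order[0]: "low", order[1]: "medium", order[2]: "high"}
clusters = {name: labels[a] for name, a in zip(markets, assign)}
```

With real questionnaire scores in place of the placeholders, the same routine reproduces the low/medium/high partition reported in the abstract.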
Mobile networks possess significant information and are thus considered a gold mine for the research community. The call detail records (CDR) of a mobile network are used to identify the network's efficacy and mobile users' behavior. It is evident from the recent literature that cyber-physical systems (CPS) have been used in the analytics and modeling of telecom data; in addition, CPS are used to provide valuable services in smart cities. In general, a typical telecom company has millions of subscribers and thus generates massive amounts of data, so data storage, analysis, and processing are the key concerns. To solve these issues, we propose a multilevel cyber-physical social system (CPSS) for the analysis and modeling of large internet data. Our proposed system has three levels, each with a specific functionality. At the first level, raw CDR data are collected and preprocessing, cleaning, and error removal operations are performed. At the second level, data reduction, integration, processing, and storage are performed, and the suggested internet activity record measures are applied. The system then constructs a graph and performs network analysis. The proposed CPSS accurately identifies different areas of peak internet usage in a city (Milan). Our research helps network operators plan effective network configuration, management, and optimization of resources.
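The graph-construction and peak-usage step can be sketched as follows. The record layout and cell names are hypothetical and do not reflect the Milan dataset's actual schema:

```python
from collections import defaultdict

# Sketch of the graph-construction step: build an undirected, weighted
# call graph from CDR-like records and find the busiest cell area by
# total activity. Records and field names are hypothetical.
records = [
    # (caller_cell, callee_cell, traffic_volume)
    ("A", "B", 10), ("A", "C", 4), ("B", "C", 7), ("A", "B", 5),
]

graph = defaultdict(dict)      # adjacency map: node -> {neighbor: weight}
activity = defaultdict(int)    # total traffic touching each cell
for src, dst, vol in records:
    graph[src][dst] = graph[src].get(dst, 0) + vol
    graph[dst][src] = graph[dst].get(src, 0) + vol
    activity[src] += vol
    activity[dst] += vol

# The cell with the highest accumulated traffic is the peak-usage area.
peak_cell = max(activity, key=activity.get)
```

Real CDR analysis would add the temporal dimension (per-hour slices) before ranking cells, but the aggregation pattern is the same.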
A P-vector method was optimized using a variational data assimilation technique, with which the vertical structures and seasonal variations of zonal velocities and transports were investigated. The results showed that westward and eastward flows occur in the Luzon Strait during the same period of the year; however, the net volume transport is westward. In the upper level (0-500 m), the westward flow exists in the middle and south of the Luzon Strait and the eastward flow exists in the north; there are two centers of westward flow and one center of eastward flow. In the middle of the Luzon Strait, westward and eastward flows appear alternately in the vertical direction. The westward flow strengthens in winter and weakens in summer. The net volume transport is strong in winter (5.53 Sv) but weak in summer (0.29 Sv). Except in summer, the volume transport in the upper level accounts for more than half of the total volume transport (0 m to bottom). In summer, the net volume transport in the upper level is eastward (1.01 Sv), but westward underneath.
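The transport figures quoted in Sverdrups come from integrating velocity over the cross-section of the strait. A back-of-envelope sketch, with illustrative velocities rather than the assimilated P-vector fields:

```python
# Net volume transport through a section is the integral of cross-section
# velocity over depth and width, expressed in Sverdrups (1 Sv = 1e6 m^3/s).
# The velocity grid below is illustrative, not the assimilated field.
def transport_sv(velocities, dz, dx):
    """velocities[i][j]: zonal velocity (m/s, westward negative) in cell (i, j)."""
    total = sum(v * dz * dx for row in velocities for v in row)
    return total / 1e6   # m^3/s -> Sv

# 2 depth levels x 3 horizontal cells, each 250 m deep and 20 km wide.
v = [[-0.10, -0.05, 0.02],
     [-0.04,  0.01, 0.03]]
net = transport_sv(v, dz=250.0, dx=20_000.0)   # negative => net westward
```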
Rapid advancements in the Industrial Internet of Things (IIoT) and artificial intelligence (AI) pose serious security issues by revealing secret data. Data security is therefore a crucial issue in IIoT communication, where secrecy needs to be guaranteed in real time. Practically, AI techniques can be utilized to design image steganographic techniques in IIoT. In addition, encryption techniques play an important role in protecting the information generated by IIoT devices from unauthorized access. To accomplish secure data transmission in an IIoT environment, this study presents a novel encryption with image steganography based data hiding technique (EIS-DHT) for IIoT. The proposed EIS-DHT technique involves a new quantum black widow optimization (QBWO) to competently choose the pixel values for hiding secret data in the cover image. In addition, a multi-level discrete wavelet transform (DWT) based transformation process takes place. The secret image is divided into its R, G, and B bands, which are then individually encrypted using Blowfish, Twofish, and the Lorenz hyperchaotic system. Finally, the stego image is generated by placing the encrypted images into the optimum pixel locations of the cover image. To validate the data hiding performance of the EIS-DHT technique, a set of simulation analyses was carried out and the results were inspected in terms of different measures. The experimental outcomes show the supremacy of the EIS-DHT technique over existing techniques and ensure maximum security.
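To make the hide/extract idea concrete, here is a minimal least-significant-bit embedding sketch. It illustrates only the basic pixel-level data hiding concept; the EIS-DHT scheme's QBWO pixel selection, DWT transformation, and per-band encryption are not reproduced:

```python
# Minimal data-hiding sketch: embed a byte string in the least significant
# bits of cover-image pixels and recover it. This shows only the basic
# hide/extract idea, not the EIS-DHT pipeline (QBWO, DWT, per-band ciphers).
def embed(pixels, payload):
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    stego = pixels[:]
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b    # overwrite the pixel's LSB
    return stego

def extract(pixels, n_bytes):
    out = []
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[k * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = list(range(64, 128))           # 64 fake 8-bit pixel values
stego = embed(cover, b"hi")
secret = extract(stego, 2)             # recovers b"hi"
```

Because only the LSB changes, each stego pixel differs from its cover pixel by at most 1, which is why such embedding is visually imperceptible.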
A P-vector method is optimized using the variational data assimilation technique (VDAT). The absolute geostrophic velocity fields in the vicinity of the Luzon Strait (LS) are calculated, and the spatial structures and seasonal variations of the absolute geostrophic velocity field are investigated. Our results show that the Kuroshio enters the South China Sea (SCS) in the south and middle of the Luzon Strait and flows out in the north, so the Kuroshio makes a slight clockwise curve in the Luzon Strait; the curve is strong in winter and weak in summer. During winter, a westward current appears at the surface to the west of the Luzon Strait; it is the northern part of a cyclonic gyre that exists in the northeast of the SCS. An anti-cyclonic gyre occurs at the intermediate level in the northeast of the SCS, and an eastward current exists to the southeast of this anti-cyclonic gyre.
In this paper, three techniques for compressing classified satellite cloud images with no distortion are presented: line run coding, quadtree DF (depth-first) representation, and H coding. The first two were proposed by other authors; the third is our own. A comparison of their compression rates is given at the end of this paper. Further application of these image compression techniques to satellite data and other meteorological data looks promising.
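Line run coding exploits the long runs of identical class values typical of classified imagery. A minimal run-length encode/decode sketch (quadtree DF and H coding are not shown):

```python
# Lossless run-length coding sketch for one row of a classified image:
# the row becomes a sequence of [class_value, run_length] pairs.
def rle_encode(row):
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

row = [3, 3, 3, 0, 0, 7, 7, 7, 7]
runs = rle_encode(row)                # [[3, 3], [0, 2], [7, 4]]
```

The compression rate is simply the ratio of original symbols to stored pairs, which is high exactly when the classification produces large uniform regions.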
This paper deals with the application of data mining techniques to conceptual design knowledge for a launch vehicle (LV) with a hybrid rocket engine (HRE). This LV is a space transportation concept that can deliver a micro-satellite to a sun-synchronous orbit (SSO). To design a higher performance LV with an HRE, the optimum size of each component, such as the oxidizer tank containing liquid oxidizer, the combustion chamber containing solid fuel, the pressurizing tank, and the nozzle, should be determined. Kriging-based ANOVA (analysis of variance) and a SOM (self-organizing map) are employed as data mining techniques for knowledge discovery. In this study, paraffin (FT-0070) is used as the propellant of the HRE. The relationships among LV performances and design variables are investigated through analysis and visualization. To calculate the engine performance, the regression rate is computed based on an empirical expression. Design knowledge for the multi-stage LV with HRE is extracted by analysis using ANOVA and the SOM. As a result, useful design knowledge on the present design problem is obtained for designing an HRE for space transportation.
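Empirical regression-rate expressions for hybrid rockets commonly take a power-law form in the oxidizer mass flux. A sketch with illustrative placeholder coefficients (not the FT-0070 paraffin values used in the paper):

```python
# Empirical hybrid-rocket regression-rate law sketch: r = a * Go**n, where
# Go is the oxidizer mass flux (kg/m^2/s). The coefficients a and n below
# are illustrative placeholders, not the paper's fitted paraffin values.
def regression_rate(g_ox, a=1.0e-4, n=0.62):
    """Fuel surface regression rate (m/s) for oxidizer mass flux g_ox."""
    return a * g_ox ** n

# Key design consequence: doubling the oxidizer flux scales the rate by
# 2**n (sub-linear), not by 2 - this couples tank and chamber sizing.
r1 = regression_rate(100.0)
r2 = regression_rate(200.0)
ratio = r2 / r1
```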
As COVID-19 poses a major threat to people's health and the economy, there is an urgent need for forecasting methodologies that can anticipate its trajectory efficiently. In non-stationary time series forecasting tasks, there is frequently a hysteresis in the predicted values relative to the real values. To address this problem, this paper proposes an enhanced Multilayer Deep Time Convolutional Neural Network (MDTCNet) for COVID-19 prediction, combining a multilayer deep time convolutional network with a feature fusion network. In particular, the model can capture the deep features and temporal dependencies in uncertain time series, and the features can then be combined using a feature fusion network and a multilayer perceptron. Finally, experimental verification is conducted on the task of predicting real daily confirmed COVID-19 cases in the world and in the United States, realizing short-term and long-term prediction of daily confirmed cases, verifying the effectiveness and accuracy of the suggested method, and reducing the hysteresis of the prediction results.
In the early stages of oilfield development, insufficient production data and an unclear understanding of oil production present a challenge to reservoir engineers in devising effective development plans. To address this challenge, this study proposes a method that uses data mining technology to search for similar oil fields and predict well productivity. A query system of 135 analogy parameters is established based on geological and reservoir engineering research, and the weight values of these parameters are calculated using a data algorithm to establish an analogy system. The fuzzy matter-element algorithm is then used to calculate the similarity between oil fields, with fields having similarity greater than 70% identified as similar oil fields. Using similar oil fields as sample data, 8 important factors affecting well productivity are identified using the Pearson coefficient and the mean decrease impurity (MDI) method. To establish productivity prediction models, linear regression (LR), random forest regression (RF), support vector regression (SVR), backpropagation (BP), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) algorithms are used. Their performance is evaluated using the coefficient of determination (R^2), explained variance score (EV), mean squared error (MSE), and mean absolute error (MAE) metrics. The LightGBM model is selected to predict the productivity of 30 wells in the PL field, with an average error of only 6.31%, which significantly improves the accuracy of productivity prediction and meets application requirements in the field.
Finally, a software platform integrating data query, oil field analogy, productivity prediction, and a knowledge base is established to identify patterns in massive reservoir development data and provide valuable technical references for new reservoir development.
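The error metrics used to rank the models can be sketched directly. The productivity values below are illustrative, not the PL-field results:

```python
# Sketch of the model-evaluation step: mean absolute error and mean
# relative error between predicted and actual well productivity.
# The numbers are illustrative, not the PL-field data.
def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mean_relative_error(actual, predicted):
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual    = [100.0, 80.0, 120.0]      # observed productivity (units arbitrary)
predicted = [ 94.0, 84.0, 126.0]      # a model's predictions
abs_err = mae(actual, predicted)
rel_err = mean_relative_error(actual, predicted)   # ~5.3% average error
```

An "average error of 6.31%" in the abstract corresponds to a mean relative error of 0.0631 under this definition.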
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) general survey is a spectroscopic survey that will eventually cover approximately half of the celestial sphere and collect 10 million spectra of stars, galaxies and QSOs. Objects in both the pilot survey and the first-year regular survey are included in LAMOST DR1. The pilot survey started in October 2011 and ended in June 2012, and the data were released to the public as the LAMOST Pilot Data Release in August 2012. The regular survey started in September 2012 and completed its first year of operation in June 2013. LAMOST DR1 includes a total of 1202 plates containing 2 955 336 spectra, of which 1 790 879 spectra have observed signal-to-noise ratio (SNR) ≥ 10. All data with SNR ≥ 2 are formally released as LAMOST DR1 under the LAMOST data policy. This data release contains a total of 2 204 696 spectra, of which 1 944 329 are stellar spectra, 12 082 are galaxy spectra and 5017 are quasar spectra. DR1 includes not only spectra but also three stellar catalogs with measured parameters: late A and FGK-type stars with high-quality spectra (1 061 918 entries), A-type stars (100 073 entries), and M-type stars (121 522 entries). This paper introduces the survey design, the observational and instrumental limitations, data reduction and analysis, and some caveats. A description of the FITS structure of spectral files and parameter catalogs is also provided.
This paper describes the data release of the LAMOST pilot survey, which includes data reduction, calibration, spectral analysis, data products and data access. The accuracy of the released data and the information about the FITS headers of spectra are also introduced. The released data set includes 319 000 spectra and a catalog of these objects.
Based on years of input from the four geodetic techniques (SLR, GPS, VLBI and DORIS), combination strategies were studied at SHAO to generate a new global terrestrial reference frame as the material realization of the ITRS defined in the IERS Conventions. The main input includes time series of weekly solutions (or fortnightly for SLR 1983-1993) of observational data for the satellite techniques and session-wise normal equations for VLBI. The set of estimated unknowns includes the 3-dimensional Cartesian coordinates at the reference epoch 2005.0 of the globally distributed stations and their rates, as well as the time series of consistent Earth Orientation Parameters (EOPs) at the same epochs as the input. Besides the final solution, SOL-2, generated by using all inputs before 2015.0 obtained from short-term observation processing, a reference solution, SOL-1, was also computed using the input before 2009.0 with the same combination procedures, for comparison with ITRF2008 and DTRF2008 and for evaluating the effect of the latest six years of data on the combined results. The estimated accuracy of the x- and y-components of the SOL-1 TRF origin was better than 0.1 mm at epoch 2005.0 and better than 0.3 mm/yr in time evolution, whether compared with ITRF2008 or DTRF2008. However, the z-components of the translation parameters from SOL-1 to ITRF2008 and DTRF2008 were 3.4 mm and -1.0 mm, respectively; the z-component of the SOL-1 TRF origin was thus much closer to that of DTRF2008 than to that of ITRF2008. The translation parameters from SOL-2 to ITRF2014 were 2.2, -1.8 and 0.9 mm in the x-, y- and z-components respectively, with rates smaller than 0.4 mm/yr.
Similarly, the scale factor transformed from SOL-1 to DTRF2008 was much smaller than that to ITRF2008. The scale parameter from SOL-2 to ITRF2014 was -0.31 ppb, with a rate lower than 0.01 ppb/yr. The external precision (WRMS) of the combined EOP series compared with the IERS EOP 08 C04 series was smaller than 0.06 mas for polar motion, smaller than 0.01 ms for UT1-UTC and smaller than 0.02 ms for LOD. The precision of the EOPs in SOL-2 was slightly higher than that of SOL-1.
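The translation, scale, and rotation parameters quoted above are the seven parameters of a Helmert (similarity) transformation between frames. A sketch applying such a transformation to one station position, with illustrative parameter values and one common small-angle sign convention (conventions differ between analysis centers, so the signs here are an assumption):

```python
import math

# 7-parameter Helmert (similarity) transformation sketch, as used when
# comparing terrestrial reference frames: x' = T + (1 + s) * R * x with
# small rotation angles. Parameter values are illustrative, and the
# rotation sign convention is one common choice, not necessarily the
# convention of any particular service.
def helmert(xyz, t, scale_ppb, rot_mas):
    x, y, z = xyz
    s = scale_ppb * 1e-9                               # ppb -> unitless
    # milliarcseconds -> radians
    rx, ry, rz = (r * math.pi / (180 * 3600 * 1000) for r in rot_mas)
    # Small-angle rotation (first-order in the angles).
    xr = x - rz * y + ry * z
    yr = rz * x + y - rx * z
    zr = -ry * x + rx * y + z
    f = 1.0 + s
    return (t[0] + f * xr, t[1] + f * yr, t[2] + f * zr)

# A point near the Earth's surface (metres); a -0.31 ppb scale change
# alone shifts a 4000 km coordinate by about -1.2 mm.
p = (4_000_000.0, 3_000_000.0, 3_500_000.0)
q = helmert(p, t=(0.0022, -0.0018, 0.0009), scale_ppb=-0.31,
            rot_mas=(0.0, 0.0, 0.0))
shift = tuple(b - a for a, b in zip(p, q))             # metres
```

This is why sub-millimetre origin agreement and sub-ppb scale agreement between SOL-1/SOL-2 and the ITRF solutions translate into millimetre-level coordinate differences at the Earth's surface.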
Many business applications rely on their historical data to predict their business future. The marketing of products is one of the core processes for a business, and customer needs provide useful information that helps to market the appropriate products at the appropriate time. Moreover, services have recently come to be considered products, and the development of education and health services depends on historical data. Furthermore, reducing the problems and crimes of online social media networks requires a significant source of information. Data analysts need to use an efficient classification algorithm to predict the future of such businesses; however, dealing with a huge quantity of data requires great time to process. Data mining involves many useful techniques that are used to predict statistical data in a variety of business applications, and classification is one of the most widely used techniques, with a variety of algorithms. In this paper, various classification algorithms are reviewed in terms of accuracy in different areas of data mining applications. A comprehensive analysis is made after a careful reading of 20 papers in the literature. This paper aims to help data analysts choose the most suitable classification algorithm for different business applications, including business in general, online social media networks, agriculture, health, and education. Results show that FFBPN is the most accurate algorithm in the business domain.
The Random Forest algorithm is the most accurate in classifying online social network (OSN) activities. The Naïve Bayes algorithm is the most accurate for classifying agriculture datasets. OneR is the most accurate algorithm for classifying instances within the health domain. The C4.5 decision tree algorithm is the most accurate for classifying students' records to predict degree completion time.
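Of the algorithms ranked above, OneR is simple enough to sketch in full: it picks the single feature whose value-to-majority-class rule misclassifies the fewest training rows. The toy dataset below is made up for illustration:

```python
from collections import Counter, defaultdict

# Minimal OneR ("one rule") classifier sketch: for each feature, build a
# value -> majority-class rule, then keep the feature whose rule makes
# the fewest training errors. The toy dataset is illustrative.
def one_r(rows, labels):
    n_features = len(rows[0])
    best = None                      # (errors, feature_index, rule)
    for f in range(n_features):
        buckets = defaultdict(Counter)
        for row, y in zip(rows, labels):
            buckets[row[f]][y] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in buckets.items()}
        errors = sum(1 for row, y in zip(rows, labels) if rule[row[f]] != y)
        if best is None or errors < best[0]:
            best = (errors, f, rule)
    return best

# Hypothetical two-feature health records: (blood_pressure, smoker).
rows   = [("high", "yes"), ("high", "no"), ("low", "yes"), ("low", "no")]
labels = ["sick", "sick", "healthy", "healthy"]
errors, feature, rule = one_r(rows, labels)
# Here feature 0 alone separates the classes perfectly.
```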
The study of marine data visualization is of great value. Marine data, due to their large scale, random variation and multiresolution nature, are hard to visualize and analyze. Nowadays, constructing an ocean model and visualizing model results have become some of the most important research topics of the 'Digital Ocean'. In this paper, a spherical ray casting method is developed to improve the traditional ray-casting algorithm and to make efficient use of GPUs. For ocean current data, a 3D view-dependent line integral convolution method is used, in which the spatial frequency is adapted according to the distance from the camera. The study is based on a 3D virtual reality and visualization engine, namely the VV-Ocean. Some interactive operations are also provided to highlight interesting structures and characteristics of the volumetric data. Finally, marine data gathered in the East China Sea are displayed and analyzed. The results show that the method meets the requirements of real-time and interactive rendering.
To improve our understanding of the formation and evolution of the Moon, one of the payloads onboard the Chang'e-3 (CE-3) rover is Lunar Penetrating Radar (LPR). This investigation is the first attempt to explore the lunar subsurface structure by using ground penetrating radar with high resolution. We have probed the subsurface to a depth of several hundred meters using LPR. In-orbit testing, data processing and the preliminary results are presented. These observations have revealed the configuration of the regolith, whose thickness varies from about 4 m to 6 m. In addition, one layer of lunar rock, which is about 330 m deep and might have accumulated during the depositional hiatus of mare basalts, was detected.
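Radar depths like those above follow from the two-way travel time of the echo and the wave speed in the subsurface material. A sketch, with an illustrative permittivity rather than the CE-3 regolith estimate:

```python
import math

# Depth sketch for ground penetrating radar: the pulse travels down and
# back, so depth = v * t / 2, with the wave speed v = c / sqrt(eps_r).
# The relative permittivity below is an illustrative assumption, not the
# CE-3 regolith value.
C = 299_792_458.0   # speed of light in vacuum, m/s

def radar_depth(two_way_time_s, eps_r):
    v = C / math.sqrt(eps_r)
    return v * two_way_time_s / 2.0

# A 100 ns echo in material with eps_r = 4 corresponds to ~7.5 m depth.
d = radar_depth(100e-9, 4.0)
```

Uncertainty in the assumed permittivity maps directly into depth uncertainty, which is why regolith thickness estimates carry a range (about 4 m to 6 m here).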
Stokes inversion calculation is a key process in resolving polarization information on radiation from the Sun and obtaining the associated vector magnetic fields. Even in the simple case of local thermodynamic equilibrium (LTE) where the Milne-Eddington approximation is valid, the inversion problem may not be easy to solve, and the initial values for the iterations are important in handling cases with multiple minima. In this paper, we develop a fast inversion technique without iterations. The computation takes only 1/100 of the time that the iterative algorithm takes. In addition, it can provide usable initial values even in cases with lower spectral resolutions. This strategy is useful for a filter-type Stokes spectrograph, such as SDO/HMI and the developed two-dimensional real-time spectrograph (2DS).
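For context, the simplest non-iterative estimate of the line-of-sight field comes from the standard weak-field approximation, which relates Stokes V to the derivative of Stokes I. This is a common shortcut, not the iteration-free technique developed in the paper:

```python
# Weak-field approximation sketch: in the weak-field regime,
# V(lambda) = -C * B_los * dI/dlambda with
# C = 4.6686e-13 * lambda0**2 * g_eff (lambda in Angstrom, B in Gauss).
# This standard shortcut is shown for context only; it is not the
# paper's fast inversion technique.
LAMBDA0 = 6173.0    # Fe I line used by SDO/HMI, in Angstrom
G_EFF = 2.5         # effective Lande factor of that line
C = 4.6686e-13 * LAMBDA0 ** 2 * G_EFF

def b_los(v_signal, didlambda):
    """Invert the weak-field relation for the line-of-sight field (Gauss)."""
    return -v_signal / (C * didlambda)

# Round-trip check with a synthetic 1000 G field.
didl = -0.02                        # illustrative local slope of Stokes I
v = -C * 1000.0 * didl              # synthetic Stokes V signal
b = b_los(v, didl)                  # recovers the 1000 G input
```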
In this paper, an advanced YOLOv7 model is proposed to tackle the challenges associated with ship detection and recognition tasks, such as the irregular shapes and varying sizes of ships. The improved model replaces the fixed anchor boxes utilized in conventional YOLOv7 models with a set of more suitable anchor boxes specifically designed based on the size distribution of ships in the dataset. This paper also introduces a novel multi-scale feature fusion module, comprising Path Aggregation Network (PAN) modules, enabling the efficient capture of ship features across different scales. Furthermore, data preprocessing is enhanced through the application of data augmentation techniques, including random rotation, scaling, and cropping, which serve to bolster data diversity and robustness. The distribution of positive and negative samples in the dataset is balanced using random sampling, ensuring a more accurate representation of real-world scenarios. Comprehensive experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches in terms of both detection accuracy and robustness, highlighting the potential of the improved YOLOv7 model for practical applications in the maritime domain.
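The anchor-redesign idea can be sketched by scoring candidate anchors against the dataset's ship width/height pairs with an IoU measure. The box sizes below are illustrative, not the paper's dataset statistics:

```python
# Sketch of the anchor-redesign idea: score candidate anchor sets by the
# average best IoU against the dataset's ship width/height pairs. All
# box sizes are illustrative, not the paper's measured distribution.
def iou_wh(a, b):
    """IoU of two boxes that share a top-left corner (width/height only)."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def mean_best_iou(ship_sizes, anchors):
    return sum(max(iou_wh(s, a) for a in anchors)
               for s in ship_sizes) / len(ship_sizes)

# Ships are wide and flat; generic square anchors fit them poorly.
ships = [(30, 10), (32, 12), (120, 40), (110, 35)]
default_anchors = [(16, 16), (64, 64)]     # generic square anchors
ship_anchors = [(31, 11), (115, 38)]       # shaped like the data
score_default = mean_best_iou(ships, default_anchors)
score_ship = mean_best_iou(ships, ship_anchors)
```

In practice the ship-shaped anchors would be obtained by clustering the dataset's box dimensions (for example, k-means under a 1 - IoU distance) rather than chosen by hand.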
A new model for color edge detection that uses second derivative operators and a data fusion mechanism is proposed in this paper. The second-order neighborhood captures the connection between the current pixel and its surroundings, for each RGB component of the input image. Once the image edges are detected for the three primary colors (red, green, and blue), these channels are merged using a combination rule, and a final decision is applied to obtain the segmentation. This process allows different data sources to be combined, which is essential for improving image information quality and achieving optimal image segmentation. Finally, the segmentation results of the proposed model are validated, the classification accuracy on the tested data is assessed, and a comparison with other current models is conducted. The comparison results show that the proposed model outperforms the existing models in image segmentation.
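The two stages can be sketched with a discrete Laplacian (a second-derivative operator) applied per RGB channel, followed by a simple fusion rule. The combination rule here (a pixel is an edge if any channel responds) is a simplification of the paper's fusion mechanism, and the 5x5 test image and threshold are illustrative:

```python
# Sketch of the two stages: a second-derivative (Laplacian) edge response
# per RGB channel, then a simplified fusion rule (edge if any channel
# responds). The test image and threshold are illustrative.
def laplacian_edges(channel, thresh):
    h, w = len(channel), len(channel[0])
    edges = [[False] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            # 4-neighbor discrete Laplacian.
            lap = (channel[i-1][j] + channel[i+1][j] + channel[i][j-1]
                   + channel[i][j+1] - 4 * channel[i][j])
            edges[i][j] = abs(lap) > thresh
    return edges

def fuse(r, g, b):
    return [[r[i][j] or g[i][j] or b[i][j] for j in range(len(r[0]))]
            for i in range(len(r))]

# A vertical boundary between columns 1 and 2 in the red channel only.
red   = [[0, 0, 255, 255, 255] for _ in range(5)]
green = [[0] * 5 for _ in range(5)]
blue  = [[0] * 5 for _ in range(5)]
edge_map = fuse(laplacian_edges(red, 50),
                laplacian_edges(green, 50),
                laplacian_edges(blue, 50))
```

Fusing per-channel responses catches edges that exist in only one color component, which a grayscale detector would miss.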
Unmanned vehicles currently face many difficulties and challenges in improving safety when running in complex urban road traffic environments, such as low intelligence and poor comfort during driving. The real-time performance of vehicles and the comfort requirements of passengers in path planning and tracking control of unmanned vehicles have attracted more and more attention. In this paper, to improve the real-time performance of the autonomous vehicle planning module and to meet the comfort requirements of passengers, a local granular-based path planning method and tracking control based on multi-segment Bezier curve splicing and model predictive control theory are proposed. In particular, the maximum trajectory curvature satisfying ride comfort is regarded as an important constraint, and the corresponding curvature threshold is utilized to calculate the control points of the Bezier curves. By using low-order interpolation curve splicing, the planning computation is reduced and the real-time performance of planning is improved compared with a one-segment curve fitting method. Furthermore, the comfort of the planned path is reflected intuitively by the curvature information of the path. Finally, the effectiveness of the proposed control method is verified on a co-simulation platform built with MATLAB/Simulink and CarSim. The simulation results show that the path tracking effect of multi-segment Bezier curve fitting is better than that of high-order curve planning in terms of real-time performance and comfort.
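The curvature constraint at the heart of the comfort criterion can be sketched for a cubic Bezier segment using the standard formula kappa(t) = |B'(t) x B''(t)| / |B'(t)|^3. The control points and the curvature bound below are illustrative, not the paper's planner output:

```python
import math

# Sketch of the comfort constraint: the curvature of a cubic Bezier
# segment, kappa(t) = |B'(t) x B''(t)| / |B'(t)|^3, must stay below a
# threshold chosen for ride comfort. Control points and the bound are
# illustrative, not the paper's values.
def bezier_derivs(p, t):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = p
    mt = 1.0 - t
    # First derivative of a cubic Bezier curve.
    d1 = (3 * (mt**2 * (x1 - x0) + 2*mt*t * (x2 - x1) + t**2 * (x3 - x2)),
          3 * (mt**2 * (y1 - y0) + 2*mt*t * (y2 - y1) + t**2 * (y3 - y2)))
    # Second derivative.
    d2 = (6 * (mt * (x2 - 2*x1 + x0) + t * (x3 - 2*x2 + x1)),
          6 * (mt * (y2 - 2*y1 + y0) + t * (y3 - 2*y2 + y1)))
    return d1, d2

def curvature(p, t):
    (dx, dy), (ddx, ddy) = bezier_derivs(p, t)
    return abs(dx * ddy - dy * ddx) / math.hypot(dx, dy) ** 3

# A straight-line Bezier has zero curvature everywhere.
line = [(0, 0), (1, 0), (2, 0), (3, 0)]
kappa_line = curvature(line, 0.5)

# A bending segment: sample curvature and check an illustrative bound.
bend = [(0, 0), (1, 0), (2, 1), (3, 1)]
max_kappa = max(curvature(bend, i / 20) for i in range(21))
comfortable = max_kappa <= 1.5      # illustrative threshold, 1/m
```

A planner that inverts this relation, choosing control points so that max kappa stays below the threshold, is what turns the comfort requirement into a geometric constraint on the spliced segments.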
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2021R1A6A1A03039493).
Abstract: Mobile networks possess significant information and are thus considered a gold mine for the research community. The call detail records (CDRs) of a mobile network are used to assess the network's efficacy and mobile users' behavior. It is evident from the recent literature that cyber-physical systems (CPS) have been used in the analytics and modeling of telecom data; in addition, CPS are used to provide valuable services in smart cities. A typical telecom company has millions of subscribers and thus generates massive amounts of data, so data storage, analysis, and processing are key concerns. To address these issues, we propose a multilevel cyber-physical social system (CPSS) for the analysis and modeling of large-scale internet data. The proposed system has three levels, each with a specific functionality. At the first level, raw CDR data are collected and preprocessing, cleaning, and error-removal operations are performed. At the second level, data reduction, integration, processing, and storage are performed, and the suggested internet activity record measures are applied. The system then constructs a graph and performs network analysis, allowing it to accurately identify areas of peak internet usage in a city (Milan). This research helps network operators plan effective network configuration, management, and optimization of resources.
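The graph-construction step at the heart of the analysis can be sketched as follows. The CDR tuples and field layout below are hypothetical simplifications of real call detail records:

```python
from collections import defaultdict

def build_call_graph(cdrs):
    """Build an undirected weighted graph from (caller, callee, duration)
    CDR tuples; each edge weight accumulates total call duration."""
    graph = defaultdict(lambda: defaultdict(float))
    for caller, callee, duration in cdrs:
        graph[caller][callee] += duration
        graph[callee][caller] += duration
    return graph

def weighted_degree(graph, node):
    """Total call time touching a node: a simple activity measure that
    highlights heavy-usage subscribers or cells."""
    return sum(graph[node].values())

# hypothetical CDRs: (caller, callee, duration in seconds)
cdrs = [("A", "B", 120), ("A", "C", 60), ("B", "C", 30), ("A", "B", 45)]
g = build_call_graph(cdrs)
```

Once the graph is built, standard network measures (degree, centrality, community structure) can be computed to locate peak-usage areas.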
Funding: Supported by the Major State Basic Research Program (No. G1999043810), the Open Laboratory for Tropical Marine Environmental Dynamics (LED), South China Sea Institute of Oceanology, Chinese Academy of Sciences, and the NSFC (No. 40306004).
Abstract: A P-vector method was optimized using a variational data assimilation technique, with which the vertical structures and seasonal variations of zonal velocities and transports were investigated. The results show that westward and eastward flows occur in the Luzon Strait during the same period of the year, but the net volume transport is westward. In the upper level (0-500 m), the westward flow appears in the middle and south of the Luzon Strait and the eastward flow in the north; there are two centers of westward flow and one center of eastward flow. In the middle of the Luzon Strait, westward and eastward flows alternate in the vertical direction. The westward flow strengthens in winter and weakens in summer, and the net volume transport is strong in winter (5.53 Sv) but weak in summer (0.29 Sv). Except in summer, the volume transport in the upper level accounts for more than half of the total volume transport (0 m to bottom). In summer, the net volume transport in the upper level is eastward (1.01 Sv), but westward underneath.
Funding: This research work was funded by Institutional Fund Projects under Grant No. IFPRC-215-249-2020. The authors gratefully acknowledge technical and financial support from the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.
Abstract: Rapid advancements of the Industrial Internet of Things (IIoT) and artificial intelligence (AI) pose serious security issues by revealing secret data. Data security therefore becomes a crucial issue in IIoT communication, where secrecy must be guaranteed in real time. Practically, AI techniques can be utilized to design image steganographic techniques in IIoT. In addition, encryption techniques play an important role in protecting the information generated by IIoT devices from unauthorized access. To accomplish secure data transmission in the IIoT environment, this study presents a novel encryption with image steganography based data hiding technique (EIS-DHT). The proposed EIS-DHT technique involves a new quantum black widow optimization (QBWO) to competently choose the pixel values for hiding secret data in the cover image. In addition, a multi-level discrete wavelet transform (DWT) based transformation process takes place. The secret image is divided into its R, G, and B bands, which are then individually encrypted using Blowfish, Twofish, and the Lorenz hyperchaotic system. Finally, the stego image is generated by placing the encrypted images into the optimum pixel locations of the cover image. To validate the data hiding performance of the EIS-DHT technique, a set of simulation analyses is carried out and the results are inspected in terms of different measures. The experimental outcomes show the supremacy of the EIS-DHT technique over existing techniques and ensure maximum security.
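The EIS-DHT pixel-selection step relies on QBWO and a multi-level DWT; as a much-simplified illustration of the underlying data-hiding idea only, the sketch below embeds payload bits in the least-significant bits of a fixed list of pixel positions (standing in for optimizer-chosen locations, which are hypothetical here):

```python
def embed_bits(pixels, bits, positions):
    """Write one payload bit into the least-significant bit (LSB) of each
    selected pixel; `positions` stands in for the locations an optimizer
    such as QBWO would choose."""
    stego = list(pixels)
    for pos, bit in zip(positions, bits):
        stego[pos] = (stego[pos] & ~1) | bit   # clear LSB, then set it
    return stego

def extract_bits(pixels, positions):
    """Recover the payload by reading the LSB at each embedding position."""
    return [pixels[pos] & 1 for pos in positions]

cover = [200, 13, 87, 54, 255, 128]   # hypothetical 8-bit cover pixels
payload = [1, 0, 1]
positions = [4, 1, 3]                 # hypothetical optimizer output
stego = embed_bits(cover, payload, positions)
```

Because only the LSBs change, the stego pixels differ from the cover by at most 1 per selected position, which is what keeps the embedding visually imperceptible.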
Funding: This work was supported by the Knowledge Innovation Project of the Chinese Academy of Sciences under contract Grant No. KZCX2-205 and the National Natural Science Foundation of China under contract Grant No. 40106002.
Abstract: A P-vector method is optimized using the variational data assimilation technique (VDAT). The absolute geostrophic velocity fields in the vicinity of the Luzon Strait (LS) are calculated, and their spatial structures and seasonal variations are investigated. Our results show that the Kuroshio enters the South China Sea (SCS) in the south and middle of the Luzon Strait and flows out in the north, so the Kuroshio makes a slight clockwise curve in the Luzon Strait; the curve is strong in winter and weak in summer. During winter, a westward current appears at the surface to the west of the Luzon Strait; it is the northern part of a cyclonic gyre that exists in the northeast of the SCS. An anti-cyclonic gyre occurs at the intermediate level in the northeast of the SCS, and an eastward current exists to the southeast of this anti-cyclonic gyre.
Abstract: In this paper, three techniques for losslessly compressing classified satellite cloud images are presented: line run coding, quadtree DF (depth-first) representation, and H coding. The first two were developed by others and the third by the authors. A comparison of their compression rates is given at the end of the paper. Further application of these image compression techniques to satellite data and other meteorological data looks promising.
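Of the three schemes, line run coding is the simplest to illustrate. The sketch below is a generic run-length coder over one scan line of class labels, not the exact variant used in the paper:

```python
def rle_encode(row):
    """Lossless run-length coding of one scan line of class labels:
    consecutive equal labels collapse into (label, count) pairs."""
    runs = []
    for label in row:
        if runs and runs[-1][0] == label:
            runs[-1][1] += 1
        else:
            runs.append([label, 1])
    return [(label, count) for label, count in runs]

def rle_decode(runs):
    """Exact inverse of rle_encode: expand each run back to labels."""
    out = []
    for label, count in runs:
        out.extend([label] * count)
    return out

# one scan line of a classified cloud image (hypothetical class labels)
row = [3, 3, 3, 0, 0, 7, 7, 7, 7]
runs = rle_encode(row)
```

Classified images compress well under this scheme because class labels tend to form long homogeneous runs along each scan line, and decoding reproduces the input exactly (no distortion).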
Abstract: This paper deals with the application of data mining techniques to the conceptual design knowledge for a launch vehicle (LV) with a hybrid rocket engine (HRE). This LV is a space transportation concept that can deliver a micro-satellite to a sun-synchronous orbit (SSO). To design a higher-performance LV with an HRE, the optimum size of each component, such as the oxidizer tank containing liquid oxidizer, the combustion chamber containing solid fuel, the pressurizing tank, and the nozzle, should be determined. Kriging-based analysis of variance (ANOVA) and a self-organizing map (SOM) are employed as data mining techniques for knowledge discovery. In this study, paraffin (FT-0070) is used as the propellant of the HRE. The relationships among LV performances and design variables are then investigated through analysis and visualization; to calculate the engine performance, the regression rate is computed based on an empirical expression. Design knowledge for the multi-stage LV with an HRE is extracted by analysis using ANOVA and SOM. As a result, useful design knowledge on the present design problem is obtained for designing HREs for space transportation.
Funding: Supported by the Major Scientific and Technological Research Project of the Chongqing Education Commission (KJZD-M202000802) and the first batch of Industrial and Informatization Key Special Fund Support Projects in Chongqing in 2022 (2022000537).
Abstract: As COVID-19 poses a major threat to people's health and the economy, there is an urgent need for forecasting methodologies that can anticipate its trajectory efficiently. In non-stationary time series forecasting tasks, there is frequently a hysteresis in the predicted values relative to the real values. To address this problem, this paper proposes an enhanced Multilayer Deep Time Convolutional Neural Network (MDTCNet) for COVID-19 prediction, which combines a multilayer deep time convolutional network with a feature fusion network. In particular, it can capture the deep features and temporal dependencies in uncertain time series, and these features can then be combined using the feature fusion network and a multilayer perceptron. Finally, experimental verification is conducted on the task of predicting real daily confirmed COVID-19 cases worldwide and in the United States, realizing short-term and long-term prediction of daily confirmed cases, verifying the effectiveness and accuracy of the suggested prediction method, and reducing the hysteresis of the prediction results.
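The temporal-convolution building block can be illustrated with a minimal causal dilated convolution in plain Python. This is a sketch of the general idea behind temporal convolutional layers, not the MDTCNet architecture itself, and the kernel weights and case counts are hypothetical:

```python
def causal_conv1d(series, kernel, dilation=1):
    """Causal dilated convolution: each output value uses only the current
    and past inputs, so no future information leaks into the forecast."""
    k = len(kernel)
    out = []
    for t in range(len(series)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation        # reach back i*dilation steps
            if j >= 0:                  # ignore taps before the series start
                acc += w * series[j]
        out.append(acc)
    return out

# hypothetical daily confirmed-case counts
daily_cases = [1, 2, 4, 8, 16, 32]
smoothed = causal_conv1d(daily_cases, [0.5, 0.5], dilation=1)
```

Stacking such layers with growing dilation (1, 2, 4, ...) widens the receptive field exponentially, which is how deep time convolutional networks capture long-range temporal dependencies.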
基金supported by the National Natural Science Fund of China (No.52104049)the Science Foundation of China University of Petroleum,Beijing (No.2462022BJRC004)。
Abstract: In the early stages of oilfield development, insufficient production data and an unclear understanding of oil production present a challenge to reservoir engineers devising effective development plans. To address this challenge, this study proposes a method that uses data mining technology to search for similar oil fields and predict well productivity. A query system of 135 analogy parameters is established based on geological and reservoir engineering research, and the weight values of these parameters are calculated with a data algorithm to establish an analogy system. The fuzzy matter-element algorithm is then used to calculate the similarity between oil fields, with fields having a similarity greater than 70% identified as similar oil fields. Using similar oil fields as sample data, 8 important factors affecting well productivity are identified using the Pearson coefficient and the mean decrease impurity (MDI) method. To establish productivity prediction models, linear regression (LR), random forest regression (RF), support vector regression (SVR), backpropagation (BP), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) algorithms are used, and their performance is evaluated using the coefficient of determination (R²), explained variance score (EV), mean squared error (MSE), and mean absolute error (MAE) metrics. The LightGBM model is selected to predict the productivity of 30 wells in the PL field, with an average error of only 6.31%, which significantly improves the accuracy of productivity prediction and meets application requirements in the field. Finally, a software platform integrating data query, oilfield analogy, productivity prediction, and a knowledge base is established to identify patterns in massive reservoir development data and provide valuable technical references for new reservoir development.
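The analogy step can be illustrated with a weighted similarity score and the 70% cutoff. This is a simplified stand-in for the fuzzy matter-element calculation, and the parameter names, weights, and values below are hypothetical:

```python
def field_similarity(field_a, field_b, weights):
    """Weighted similarity between two fields described by normalized
    parameter values in [0, 1]; a simplified stand-in for the paper's
    fuzzy matter-element calculation."""
    num = sum(w * (1 - abs(field_a[p] - field_b[p])) for p, w in weights.items())
    return num / sum(weights.values())

# hypothetical parameter weights (in practice, 135 weighted parameters)
weights = {"porosity": 0.5, "permeability": 0.3, "depth": 0.2}
target = {"porosity": 0.8, "permeability": 0.6, "depth": 0.4}
candidate = {"porosity": 0.7, "permeability": 0.6, "depth": 0.5}

sim = field_similarity(target, candidate, weights)
analog = sim > 0.70   # fields above 70% similarity are treated as analogs
```

Candidates passing the threshold become the training sample for the downstream productivity models, so the weight assignment directly shapes which fields count as analogs.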
Funding: Funded by the National Basic Research Program of China (973 Program, 2014CB845700) and the National Natural Science Foundation of China (Grant No. 11390371). Funding for the project has also been provided by the National Development and Reform Commission.
Abstract: The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) general survey is a spectroscopic survey that will eventually cover approximately half of the celestial sphere and collect 10 million spectra of stars, galaxies and QSOs. Objects in both the pilot survey and the first-year regular survey are included in LAMOST DR1. The pilot survey started in October 2011 and ended in June 2012, and the data were released to the public as the LAMOST Pilot Data Release in August 2012. The regular survey started in September 2012 and completed its first year of operation in June 2013. LAMOST DR1 includes a total of 1202 plates containing 2 955 336 spectra, of which 1 790 879 spectra have an observed signal-to-noise ratio (SNR) ≥ 10. All data with SNR ≥ 2 are formally released as LAMOST DR1 under the LAMOST data policy. This data release contains a total of 2 204 696 spectra, of which 1 944 329 are stellar spectra, 12 082 are galaxy spectra and 5017 are quasar spectra. DR1 includes not only spectra but also three stellar catalogs with measured parameters: late A- and FGK-type stars with high-quality spectra (1 061 918 entries), A-type stars (100 073 entries), and M-type stars (121 522 entries). This paper introduces the survey design, the observational and instrumental limitations, data reduction and analysis, and some caveats. A description of the FITS structure of the spectral files and parameter catalogs is also provided.
Abstract: This paper describes the data release of the LAMOST pilot survey, including data reduction, calibration, spectral analysis, data products and data access. The accuracy of the released data and information about the FITS headers of the spectra are also introduced. The released data set includes 319 000 spectra and a catalog of these objects.
Funding: Supported by the Ministry of Science and Technology of China (2015FY310200), the National Key Research and Development Program of China (2016YFB0501405), the National Natural Science Foundation of China (11173048 and 11403076), the State Key Laboratory of Aerospace Dynamics, and the Crustal Movement Observation Network of China (CMONOC).
Abstract: Based on years of input from the four geodetic techniques (SLR, GPS, VLBI and DORIS), combination strategies were studied at SHAO to generate a new global terrestrial reference frame as the material realization of the ITRS defined in the IERS Conventions. The main input includes the time series of weekly solutions (or fortnightly for SLR 1983-1993) of observational data for the satellite techniques and session-wise normal equations for VLBI. The set of estimated unknowns includes the 3-dimensional Cartesian coordinates at the reference epoch 2005.0 of globally distributed stations and their rates, as well as the time series of consistent Earth Orientation Parameters (EOPs) at the same epochs as the input. Besides the final solution, SOL-2, generated using all inputs before 2015.0 obtained from short-term observation processing, a reference solution, SOL-1, was also computed using the input before 2009.0 with the same combination procedure, for comparison with ITRF2008 and DTRF2008 and for evaluating the effect of the latest six years of data on the combined results. The estimated accuracy of the x- and y-components of the SOL-1 TRF origin was better than 0.1 mm at epoch 2005.0 and better than 0.3 mm/yr in time evolution, whether compared with ITRF2008 or DTRF2008. However, the z-components of the translation parameters from SOL-1 to ITRF2008 and DTRF2008 were 3.4 mm and -1.0 mm, respectively; the z-component of the SOL-1 TRF origin was thus much closer to that of DTRF2008 than to that of ITRF2008. The translation parameters from SOL-2 to ITRF2014 were 2.2, -1.8 and 0.9 mm in the x-, y- and z-components respectively, with rates smaller than 0.4 mm/yr. Similarly, the scale factor transformed from SOL-1 to DTRF2008 was much smaller than that to ITRF2008, and the scale parameter from SOL-2 to ITRF2014 was -0.31 ppb with a rate lower than 0.01 ppb/yr. The external precision (WRMS) of the combined EOP series compared with IERS EOP 08 C04 was smaller than 0.06 mas for the polar motion components, smaller than 0.01 ms for UT1-UTC and smaller than 0.02 ms for LOD. The precision of the EOPs in SOL-2 was slightly higher than that of SOL-1.
Abstract: Many business applications rely on their historical data to predict their business future. The marketing of products is one of the core processes for a business, and customer needs provide useful information that helps to market the appropriate products at the appropriate time. Moreover, services are now also considered products, and the development of education and health services depends on historical data. Furthermore, reducing problems and crimes on online social media networks requires a significant source of information. Data analysts need an efficient classification algorithm to predict the future of such businesses; however, dealing with a huge quantity of data requires great processing time. Data mining involves many useful techniques that are used to predict statistical data in a variety of business applications, and classification is one of the most widely used techniques, with a variety of algorithms. In this paper, various classification algorithms are reviewed in terms of accuracy in different areas of data mining applications. A comprehensive analysis is made after a careful reading of 20 papers in the literature. This paper aims to help data analysts choose the most suitable classification algorithm for different business applications, including business in general, online social media networks, agriculture, health, and education. Results show that FFBPN is the most accurate algorithm in the business domain; the Random Forest algorithm is the most accurate in classifying online social network (OSN) activities; the Naïve Bayes algorithm is the most accurate for classifying agriculture datasets; OneR is the most accurate algorithm for classifying instances within the health domain; and the C4.5 decision tree algorithm is the most accurate for classifying students' records to predict degree completion time.
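Since OneR figures in the comparison, a minimal implementation helps make the algorithm concrete: it picks the single attribute whose one-level rule makes the fewest training errors. The toy health-style records below are invented for illustration:

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """OneR classifier: for each attribute, build a rule mapping each
    attribute value to its majority class, then keep the attribute whose
    rule misclassifies the fewest training rows."""
    best = None
    attrs = [a for a in rows[0] if a != target]
    for attr in attrs:
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[target]] += 1
        # majority class per attribute value
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        # rows not covered by the majority class are errors
        errors = sum(sum(c.values()) - max(c.values()) for c in by_value.values())
        if best is None or errors < best[1]:
            best = (attr, errors, rule)
    return best

# hypothetical health records
rows = [
    {"fever": "yes", "cough": "no",  "flu": "yes"},
    {"fever": "yes", "cough": "yes", "flu": "yes"},
    {"fever": "no",  "cough": "yes", "flu": "no"},
    {"fever": "no",  "cough": "no",  "flu": "no"},
]
attr, errors, rule = one_r(rows, "flu")
```

Here "fever" predicts "flu" with zero training errors while "cough" makes two, so OneR selects the fever rule; this extreme simplicity is what makes OneR a strong baseline on small tabular health data.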
Funding: Supported by the Natural Science Foundation of China under Project 41076115, the Global Change Research Program of China under Project 2012CB955603, and the Public Science and Technology Research Funds of the Ocean under Project 201005019.
Abstract: The study of marine data visualization is of great value. Marine data, due to their large scale, random variation and multiresolution nature, are hard to visualize and analyze. Constructing an ocean model and visualizing model results have become some of the most important research topics of the "Digital Ocean". In this paper, a spherical ray casting method is developed to improve the traditional ray-casting algorithm and to make efficient use of GPUs. For ocean current data, a 3D view-dependent line integral convolution method is used, in which the spatial frequency is adapted according to the distance from the camera. The study is based on a 3D virtual reality and visualization engine, the VV-Ocean. Some interactive operations are also provided to highlight interesting structures and the characteristics of volumetric data. Finally, marine data gathered in the East China Sea are displayed and analyzed. The results show that the method meets the requirements of real-time and interactive rendering.
Funding: Supported by the National Natural Science Foundation of China.
Abstract: To improve our understanding of the formation and evolution of the Moon, one of the payloads onboard the Chang'e-3 (CE-3) rover is the Lunar Penetrating Radar (LPR). This investigation is the first attempt to explore the lunar subsurface structure using ground penetrating radar with high resolution. We have probed the subsurface to a depth of several hundred meters using the LPR. In-orbit testing, data processing and preliminary results are presented. These observations have revealed the configuration of the regolith, whose thickness varies from about 4 m to 6 m. In addition, one layer of lunar rock, which is about 330 m deep and might have accumulated during the depositional hiatus of the mare basalts, was detected.
Funding: Funded by the Key Laboratory of Solar Activity of the Chinese Academy of Sciences and the National Science Foundation; supported by the National Natural Science Foundation of China (Grant Nos. 11178005 and 11427901) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB09040200).
Abstract: Stokes inversion calculation is a key process in resolving polarization information on radiation from the Sun and obtaining the associated vector magnetic fields. Even in the simple case of local thermodynamic equilibrium (LTE) where the Milne-Eddington approximation is valid, the inversion problem may not be easy to solve, and the initial values for the iterations are important in handling cases with multiple minima. In this paper, we develop a fast inversion technique without iterations, whose computation time is only 1/100 of that taken by the iterative algorithm. In addition, it can provide usable initial values even at lower spectral resolutions. This strategy is useful for filter-type Stokes spectrographs, such as SDO/HMI and the developed two-dimensional real-time spectrograph (2DS).
Funding: Supported by the Key R&D Project of Hainan Province (Grant Nos. ZDYF2022GXJS348 and ZDYF2022SHFZ039).
Abstract: In this paper, an advanced YOLOv7 model is proposed to tackle the challenges associated with ship detection and recognition tasks, such as the irregular shapes and varying sizes of ships. The improved model replaces the fixed anchor boxes utilized in conventional YOLOv7 models with a set of more suitable anchor boxes specifically designed based on the size distribution of ships in the dataset. This paper also introduces a novel multi-scale feature fusion module, comprising Path Aggregation Network (PAN) modules, enabling the efficient capture of ship features across different scales. Furthermore, data preprocessing is enhanced through data augmentation techniques, including random rotation, scaling, and cropping, which bolster data diversity and robustness. The distribution of positive and negative samples in the dataset is balanced using random sampling, ensuring a more accurate representation of real-world scenarios. Comprehensive experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art approaches in both detection accuracy and robustness, highlighting the potential of the improved YOLOv7 model for practical applications in the maritime domain.
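The anchor-box redesign can be sketched with the 1 - IoU k-means commonly used for anchor selection in YOLO-family detectors. The box sizes below are hypothetical, and deterministic quantile initialization replaces random seeding so the sketch is reproducible; this is an illustration of the general technique, not the paper's exact procedure:

```python
def iou_wh(a, b):
    """IoU of two (width, height) boxes compared at a common origin."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def anchor_kmeans(boxes, k, iters=50):
    """Cluster box sizes with the 1 - IoU distance; cluster means become
    the anchor boxes. Initialized deterministically from area quantiles."""
    sb = sorted(boxes, key=lambda b: b[0] * b[1])
    anchors = [sb[round(i * (len(sb) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for b in boxes:
            # assign each box to the anchor it overlaps best
            j = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            groups[j].append(b)
        new = [
            (sum(b[0] for b in g) / len(g), sum(b[1] for b in g) / len(g))
            if g else anchors[i]
            for i, g in enumerate(groups)
        ]
        if new == anchors:
            break
        anchors = new
    return anchors

# hypothetical ship bounding-box sizes (w, h) in pixels
boxes = [(10, 12), (12, 10), (50, 60), (60, 50), (200, 180)]
anchors = anchor_kmeans(boxes, k=3)
```

Using 1 - IoU instead of Euclidean distance keeps large and small boxes on an equal footing, which matters when ship sizes span a wide range.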
Abstract: A new model for color edge detection is proposed in this paper, which uses second derivative operators and a data fusion mechanism. The second-order neighborhood captures the connection between the current pixel and its surroundings for each RGB component of the input image. Once the image edges are detected for the three primary colors (red, green, and blue), they are merged using a combination rule, and the final decision is applied to obtain the segmentation. This process allows different data sources to be combined, which is essential to improve the quality of the image information and obtain an optimal image segmentation. Finally, the segmentation results of the proposed model are validated, the classification accuracy on the tested data is assessed, and a comparison with other current models is conducted. The comparison results show that the proposed model outperforms the existing models in image segmentation.
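A minimal version of per-channel second-derivative detection followed by fusion can be sketched as follows. The 3x3 Laplacian kernel and the max-rule fusion are generic choices standing in for the paper's operators and combination rule, and the test image is hypothetical:

```python
# standard 3x3 Laplacian (second-derivative) kernel
LAPLACIAN = [[0,  1, 0],
             [1, -4, 1],
             [0,  1, 0]]

def laplacian_response(channel, x, y):
    """Second-derivative response at (x, y) from the 3x3 neighborhood."""
    return sum(
        LAPLACIAN[dy + 1][dx + 1] * channel[y + dy][x + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    )

def fused_edge_map(r, g, b, threshold):
    """Per-channel second-derivative detection, then a max-rule fusion:
    a pixel is an edge if any channel responds strongly enough."""
    h, w = len(r), len(r[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):           # borders skipped for simplicity
        for x in range(1, w - 1):
            resp = max(abs(laplacian_response(c, x, y)) for c in (r, g, b))
            edges[y][x] = 1 if resp >= threshold else 0
    return edges

r = [[0, 0, 10, 10]] * 3                # vertical intensity step in red
zeros = [[0, 0, 0, 0]] * 3              # flat green and blue channels
edges = fused_edge_map(r, zeros, zeros, threshold=5)
```

The fusion step is what lets an edge present in only one color band survive into the final decision, which is the motivation for combining the three channel detections rather than detecting on luminance alone.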
Funding: Supported by the National Natural Science Foundation of China (62003062), Chongqing Natural Science Foundation Projects (Grant Nos. cstc2020jcyj-msxmX0803 and cstc2020jcyj-msxmX0077), a Chongqing Municipal Education Commission Scientific Research Project (Grant No. KJQN202100824), and a Chongqing Technology and Business University Postgraduate Innovative Scientific Research Project (Grant No. yjscxx2021-122-44).
Abstract: Unmanned vehicles currently face many difficulties and challenges in improving safety when running in complex urban road traffic environments, such as low intelligence and poor comfort during driving. The real-time performance of vehicles and the comfort requirements of passengers in the path planning and tracking control of unmanned vehicles have attracted more and more attention. In this paper, to improve the real-time performance of the autonomous vehicle planning module and meet the comfort requirements of passengers, a local granular-based path planning method and tracking control based on multi-segment Bezier curve splicing and model predictive control theory are proposed. In particular, the maximum trajectory curvature satisfying ride comfort is regarded as an important constraint, and the corresponding curvature threshold is used to calculate the control points of the Bezier curves. By splicing low-order interpolation curves, the planning computation is reduced and the real-time performance of planning is improved compared with a one-segment curve fitting method. Furthermore, the comfort of the planned path is reflected intuitively by the curvature information of the path. Finally, the effectiveness of the proposed control method is verified on a co-simulation platform built with MATLAB/Simulink and CarSim. The simulation results show that the path tracking effect of multi-segment Bezier curve fitting is better than that of high-order curve planning in terms of real-time performance and comfort.
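The curvature constraint can be made concrete with one cubic Bezier segment: evaluate the curve and its curvature, then reject control points whose maximum curvature exceeds the comfort threshold. The control points below are a hypothetical lane-change segment, not from the paper:

```python
def bezier_point(p, t):
    """Point on a cubic Bezier with control points p[0..3] at parameter t."""
    u = 1 - t
    return tuple(
        u**3 * p[0][i] + 3 * u**2 * t * p[1][i]
        + 3 * u * t**2 * p[2][i] + t**3 * p[3][i]
        for i in range(2)
    )

def bezier_curvature(p, t):
    """Curvature kappa(t) = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2); a
    planner can reject candidate control points whose maximum kappa
    along the segment exceeds the ride-comfort threshold."""
    u = 1 - t
    # first derivative of a cubic Bezier
    d1 = [3 * (u**2 * (p[1][i] - p[0][i]) + 2 * u * t * (p[2][i] - p[1][i])
               + t**2 * (p[3][i] - p[2][i])) for i in range(2)]
    # second derivative
    d2 = [6 * (u * (p[2][i] - 2 * p[1][i] + p[0][i])
               + t * (p[3][i] - 2 * p[2][i] + p[1][i])) for i in range(2)]
    cross = d1[0] * d2[1] - d1[1] * d2[0]
    return abs(cross) / (d1[0]**2 + d1[1]**2) ** 1.5

ctrl = [(0, 0), (1, 0), (2, 1), (3, 1)]   # hypothetical lane-change segment
mid = bezier_point(ctrl, 0.5)
kappa_mid = bezier_curvature(ctrl, 0.5)
```

Sampling kappa over t in [0, 1] for each spliced segment and comparing the maximum against the comfort threshold is the check that ties the planned geometry back to passenger comfort; note this symmetric S-shaped segment has an inflection at t = 0.5, where its curvature is exactly zero.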