In light of the rapid growth and development of social media, it has become the focus of interest in many different scientific fields. They seek to extract useful information from it, and this is called (knowledge), s...In light of the rapid growth and development of social media, it has become the focus of interest in many different scientific fields. They seek to extract useful information from it, and this is called (knowledge), such as extracting information related to people’s behaviors and interactions to analyze feelings or understand the behavior of users or groups, and many others. This extracted knowledge has a very important role in decision-making, creating and improving marketing objectives and competitive advantage, monitoring events, whether political or economic, and development in all fields. Therefore, to extract this knowledge, we need to analyze the vast amount of data found within social media using the most popular data mining techniques and applications related to social media sites.展开更多
Big Data is reforming many industrial domains by providing decision support through analyzing large data volumes.Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining the ...Big Data is reforming many industrial domains by providing decision support through analyzing large data volumes.Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining the performance and quality of data.However,because of the diversity and complexity of data,testing Big Data is challenging.Though numerous research efforts deal with Big Data testing,a comprehensive review to address testing techniques and challenges of BigData is not available as yet.Therefore,we have systematically reviewed the Big Data testing techniques’evidence occurring in the period 2010–2021.This paper discusses testing data processing by highlighting the techniques used in every processing phase.Furthermore,we discuss the challenges and future directions.Our findings show that diverse functional,non-functional and combined(functional and non-functional)testing techniques have been used to solve specific problems related to Big Data.At the same time,most of the testing challenges have been faced during the MapReduce validation phase.In addition,the combinatorial testing technique is one of the most applied techniques in combination with other techniques(i.e.,random testing,mutation testing,input space partitioning and equivalence testing)to find various functional faults through Big Data testing.展开更多
In this article, the relationship between the knowledge of competitors and the development of new products in the field of capital medical equipment has been investigated. In order to identify the criteria for measuri...In this article, the relationship between the knowledge of competitors and the development of new products in the field of capital medical equipment has been investigated. In order to identify the criteria for measuring competitors’ knowledge and developing new capital medical equipment products, marketing experts were interviewed and then a researcher-made questionnaire was compiled and distributed among the statistical sample of the research. Also, in order to achieve the goals of the research, a questionnaire among 100 members of the statistical community was selected, distributed and collected. To analyze the gathered data, the structural equation modeling (SEM) method was used in the SMART PLS 2 software to estimate the model and then the K-MEAN approach was used to cluster the capital medical equipment market based on the knowledge of actual and potential competitors. The results have shown that the knowledge of potential and actual competitors has a positive and significant effect on the development of new products in the capital medical equipment market. From the point of view of the knowledge of actual competitors, the market of “MRI”, “Ultrasound” and “SPECT” is grouped in the low knowledge cluster;“Pet MRI”, “CT Scan”, “Mammography”, “Radiography, Fluoroscopy and CRM”, “Pet CT”, “SPECT CT” and “Gamma Camera” markets are clustered in the medium knowledge. Finally, “Angiography” and “CBCT” markets are located in the knowledge cluster. From the perspective of knowledge of potential competitors, the market of “angiography”, “mammography”, “SPECT” and “SPECT CT” in the low knowledge cluster, “CT scan”, “radiography, fluoroscopy and CRM”, “pet CT”, “CBCT” markets in the medium knowledge cluster and “MRI”, “pet MRI”, “ultrasound” and “gamma camera” markets in the high knowledge cluster are located.展开更多
The karst mountainous area is an ecologically fragile region with prominent humanland contradictions.The resource-environment carrying capacity(RECC)of this region needs to be further clarified.The development of remo...The karst mountainous area is an ecologically fragile region with prominent humanland contradictions.The resource-environment carrying capacity(RECC)of this region needs to be further clarified.The development of remote sensing(RS)and geographic information system(GIS)provides data sources and processing platform for RECC monitoring.This study analyzed and established the evaluation index system of RECC by considering particularity in the karst mountainous area of Southwest China;processed multisource RS data(Sentinel-2,Aster-DEM and Landsat-8)to extract the spatial distributions of nine key indexes by GIS techniques(information classification,overlay analysis and raster calculation);proposed the methods of index integration and fuzzy comprehensive evaluation of the RECC by GIS;and took a typical area,Guangnan County in Yunnan Province of China,as an experimental area to explore the effectiveness of the indexes and methods.The results showed that:(1)The important indexes affecting the RECC of karst mountainous area are water resources,tourism resources,position resources,geographical environment and soil erosion environment.(2)Data on cultivated land,construction land,minerals,transportation,water conservancy,ecosystem services,topography,soil erosion and rocky desertification can be obtained from RS data.GIS techniques integrate the information into the RECC results.The data extraction and processing methods are feasible on evaluating RECC.(3)The RECC of Guangnan County was in the mid-carrying level in 2018.The midcarrying and low-carrying levels were the main types,accounting for more than 80.00%of the total study area.The areas with high carrying capacity were mainly distributed in the northern regions of the northwest-southeast line of the county,and other areas have a low carrying capacity comparatively.The coordination between regional resource-environment status and socioeconomic development is the key to improve RECC.This study explores the evaluation index system of RECC in karst mountainous area and the application of multisource RS data and GIS techniques in the comprehensive evaluation.The methods can be applied in related fields to provide suggestions for data/information extraction and integration,and sustainable development.展开更多
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal.Analyzing light curves to determine attitude is the most commonly used method.In photometri...Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal.Analyzing light curves to determine attitude is the most commonly used method.In photometric observations,outliers may exist in the obtained light curves due to various reasons.Therefore,preprocessing is required to remove these outliers to obtain high quality light curves.Through statistical analysis,the reasons leading to outliers can be categorized into two main types:first,the brightness of the object significantly increases due to the passage of a star nearby,referred to as“stellar contamination,”and second,the brightness markedly decreases due to cloudy cover,referred to as“cloudy contamination.”The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive.However,we propose the utilization of machine learning methods as a substitute.Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination,achieving F1 scores of 1.00 and 0.98 on a test set,respectively.We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine,then conduct comparative analyses of the results.展开更多
Beginning from the proposition that availability of reliable data is necessary to the application of nuclear techniques, we explore the questions of how such data are obtained and how the extent of their reliability i...Beginning from the proposition that availability of reliable data is necessary to the application of nuclear techniques, we explore the questions of how such data are obtained and how the extent of their reliability is ascertained. These questions are considered first in general terms in relation to data types and organizational frameworks, then with particular reference to the journal Atomic Data and Nuclear Data Tables. The reliability issue is further discussed in terms of this journal’s policies and unique presentation style.展开更多
To address the problem of real-time processing of ultra-wide bandwidth pulsar baseband data,we designed and implemented a pulsar baseband data processing algorithm(PSRDP)based on GPU parallel computing technology.PSRD...To address the problem of real-time processing of ultra-wide bandwidth pulsar baseband data,we designed and implemented a pulsar baseband data processing algorithm(PSRDP)based on GPU parallel computing technology.PSRDP can perform operations such as baseband data unpacking,channel separation,coherent dedispersion,Stokes detection,phase and folding period prediction,and folding integration in GPU clusters.We tested the algorithm using the J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes,and the J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope.We obtained the pulse profiles of each baseband data.Through experimental analysis,we have found that the pulse profiles generated by the PSRDP algorithm in this paper are essentially consistent with the processing results of Digital Signal Processing Software for Pulsar Astronomy(DSPSR),which verified the effectiveness of the PSRDP algorithm.Furthermore,using the same baseband data,we compared the processing speed of PSRDP with DSPSR,and the results showed that PSRDP was not slower than DSPSR in terms of speed.The theoretical and technical experience gained from the PSRDP algorithm research in this article lays a technical foundation for the real-time processing of QTT(Qi Tai radio Telescope)ultra-wide bandwidth pulsar baseband data.展开更多
This study aims to assess and to evaluate band ratios, brovey and HSV (Hue-Saturation-Value) techniques for discrimination and mapping the basement rock units exposed at Wadi Bulghah area, Saudi Arabia using multispec...This study aims to assess and to evaluate band ratios, brovey and HSV (Hue-Saturation-Value) techniques for discrimination and mapping the basement rock units exposed at Wadi Bulghah area, Saudi Arabia using multispectral Landsat ETM+ and SPOT-5 panchromatic data.?FieldSpec instrument is utilized to collect the spectral data of diorite, marble, gossan and volcanics, the main rock units exposed at the study area. Spectral profile of diorite exhibits very distinguished absorption features around 2.20 μm and 2.35 μm wavelength regions. These absorption features lead to lowering the band ratio values within the band-7 wavelength region. Diorite intrusions appear to have grey and dark grey image signatures on 7/3 and 7/2 band ratio images respectively. On the false color composite ratio image (7/3:R;7/2:G and 5/2:B), diorite, marble, gossan and volcanics have very dark brown, dark blue, white and yellowish brown image signatures respectively. Image fusion between previously mentioned FCC ratio image and high spatial resolution (5 meters) SPOT-5 panchromatic image is carried out by using brovey and HSV transformation methods. Visual and statistical assessment methods prove that HSV fused image yields best image interpretability results rather than brovey image. It improves the spatial resolution of the original FCC ratios image with acceptable spectral preservation.展开更多
Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. Accurate segmentation is required for volume determination, 3D rendering, radiation therapy, and su...Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. Accurate segmentation is required for volume determination, 3D rendering, radiation therapy, and surgery planning. In medical images, segmentation has traditionally been done by human experts. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore, automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. Many methods have been proposed to detect and segment 2D shapes, most of which involve template matching. Advanced segmentation techniques called Snakes or active contours have been used, considering deformable models or templates. The main purpose of this work is to apply segmentation techniques for the definition of 3D organs (anatomical structures) when big data information has been stored and must be organized by the doctors for medical diagnosis. The processes would be implemented in the CT images from patients with COVID-19.展开更多
Real-time prediction of the rock mass class in front of the tunnel face is essential for the adaptive adjustment of tunnel boring machines(TBMs).During the TBM tunnelling process,a large number of operation data are g...Real-time prediction of the rock mass class in front of the tunnel face is essential for the adaptive adjustment of tunnel boring machines(TBMs).During the TBM tunnelling process,a large number of operation data are generated,reflecting the interaction between the TBM system and surrounding rock,and these data can be used to evaluate the rock mass quality.This study proposed a stacking ensemble classifier for the real-time prediction of the rock mass classification using TBM operation data.Based on the Songhua River water conveyance project,a total of 7538 TBM tunnelling cycles and the corresponding rock mass classes are obtained after data preprocessing.Then,through the tree-based feature selection method,10 key TBM operation parameters are selected,and the mean values of the 10 selected features in the stable phase after removing outliers are calculated as the inputs of classifiers.The preprocessed data are randomly divided into the training set(90%)and test set(10%)using simple random sampling.Besides stacking ensemble classifier,seven individual classifiers are established as the comparison.These classifiers include support vector machine(SVM),k-nearest neighbors(KNN),random forest(RF),gradient boosting decision tree(GBDT),decision tree(DT),logistic regression(LR)and multilayer perceptron(MLP),where the hyper-parameters of each classifier are optimised using the grid search method.The prediction results show that the stacking ensemble classifier has a better performance than individual classifiers,and it shows a more powerful learning and generalisation ability for small and imbalanced samples.Additionally,a relative balance training set is obtained by the synthetic minority oversampling technique(SMOTE),and the influence of sample imbalance on the prediction performance is discussed.展开更多
In this paper, a time-varying rain characterization and diurnal variation in the Ku-band satellite systems simulated with synthetic storm techniques (SST) over a tropical location in Nigeria have been presented. Three...In this paper, a time-varying rain characterization and diurnal variation in the Ku-band satellite systems simulated with synthetic storm techniques (SST) over a tropical location in Nigeria have been presented. Three years’ rain rate time-series data measured by a raingauge located inside the Federal University of Technology Akure, Nigeria were utilized for the purpose of this work. The analysis is based on the CDF of one-minute rain rate;time-series simulated annual/seasonal and diurnal rain rate, rain attenuation statistics and fade margins observed over four time intervals: 00:00-06:00, 06:00-12:00, 12:00-18:00 and 18:00-24:00. In addition, comparison was also made between the synthesized values and rain attenuation statistics, at 12.245 GHz for a hypothetical downlink from EUTELSAT W4/W7 satellite in the area. It could be observed that at 99.99% link availability, the fade margin as high as ~20 dB may be required at Ku band uplink frequency bands in this area. We also observed that the communication downlinks working in the early morning and early to late in the evening hours must be compensated with an appropriate Down-Link Power Control (DLPC) for optimum performances during severe atmospheric influences in the region.展开更多
A P - vector method is optimized using the variational data assimilation technique(VDAT). The absolute geostrophic velocity fields in the vicinity of the Luzon Strait (LS) are calculated, the spatial structures and se...A P - vector method is optimized using the variational data assimilation technique(VDAT). The absolute geostrophic velocity fields in the vicinity of the Luzon Strait (LS) are calculated, the spatial structures and seasonal variations of the absolute geostrophic velocity field are investigated. Our results show that the Kuroshio enters the South China Sea (SCS) in the south and middle of the Luzon Strait and flows out in the north, so the Kuroshio makes a slight clockwise curve in the Luzon Strait, and the curve is strong in winter and weak in summer. During the winter, a westward current appears in the surface, and locates at the west of the Luzon Strait. It is the north part of a cyclonic gyre which exits in the northeast of the SCS; an anti-cyclonic gyre occurs on the intermediate level, and it exits in the northeast of the SCS, and an eastward current exits in the southeast of the anti-cyclonic gyre.展开更多
The Kehdolan area is located at 20 kilometers to the?south-east of Dozdozan Town (Eastern Azarbaijan Province). According to structural geology, volconic rocks are situated in Alborz-Azarbyjan zone, and faults?are?obs...The Kehdolan area is located at 20 kilometers to the?south-east of Dozdozan Town (Eastern Azarbaijan Province). According to structural geology, volconic rocks are situated in Alborz-Azarbyjan zone, and faults?are?observed?in?the?same direction to this system with SE-NW trend. The results show that kaolinite alteration trend with Argilic and propylitic veins?is the?same direction with SW-NE faults in this area. Therefore, these faults with these trends can be considered as the mineralization control for determination of the alterations. Different image processing techniques,?such as false color composite?(FCC), band ratios, color ratio composite?(CRC), principal component?analysis?(PCA), Crosta technique, supervised spectral angle mapping?(SAM), are used for?identification of the alteration zones associated with copper mineralization. In this project ASTER?data are process and spectral analysis to fit for recognizing intensity and kind of argillic, propylitic,?philic, and ETM+ data?which?are process and to fit for iron oxide and relation to metal mineralization of the area. For recognizing different alterations of the study area, some chemical and mineralogical analysis data from the samples showed that ASTER data and ETM+ data were?capable of hydrothermal alteration mapping with copper mineralization.?Copper mineralization in the region is in agreement with argillic alteration. SW-NE trending faults controlled the mineralization process.展开更多
Based on years of input from the four geodetic techniques (SLR, GPS, VLBI and DORIS), the strategies of the combination were studied in SHAO to generate a new global terrestrial reference frame as the material reali...Based on years of input from the four geodetic techniques (SLR, GPS, VLBI and DORIS), the strategies of the combination were studied in SHAO to generate a new global terrestrial reference frame as the material realization of the ITRS defined in IERS Conventions. The main input includes the time series of weekly solutions (or fortnightly for SLR 1983-1993) of observational data for satellite techniques and session-wise normal equations for VLBI. The set of estimated unknowns includes 3- dimensional Cartesian coordinates at the reference epoch 2005.0 of the stations distributed globally and their rates as well as the time series of consistent Earth Orientation Parameters (EOPs) at the same epochs as the input. Besides the final solution, namely SOL-2, generated by using all the inputs before 2015.0 obtained from short-term observation processing, another reference solution, namely SOL- 1, was also computed by using the input before 2009.0 based on the same combination of procedures for the purpose of comparison with ITRF2008 and DTRF2008 and for evaluating the effect of the latest six more years of data on the combined results. The estimated accuracy of the x-component and y-component of the SOL- 1 TRF-origin was better than 0.1 mm at epoch 2005.0 and better than 0.3 mm yr- 1 in time evolution, either compared with ITRF2008 or DTRF2008. However, the z-component of the translation parameters from SOL-1 to ITRF2008 and DTRF2008 were 3.4 mm and -1.0 ram, respectively. It seems that the z-component of the SOL-1 TRF-origin was much closer to the one in DTRF2008 than the one in ITRF2008. The translation parameters from SOL-2 to ITRF2014 were 2.2, -1.8 and 0.9 mm in the x-, y- and z-components respectively with rates smaller than 0.4 mmyr-1. Similarly, the scale factor transformed from SOL-1 to DTRF2008 was much smaller than that to ITRF2008. The scale parameter from SOL-2 to ITRF2014 was -0.31 ppb with a rate lower than 0.01 ppb yr-1. The external precision (WRMS) compared with IERS EOP 08 C04 of the combined EOP series was smaller than 0.06 mas for the polar motions, smaller than 0.01 ms for the UT1-UTC and smaller than 0.02 ms for the LODs. The precision of the EOPs in SOL-2 was slightly higher than that of SOL-1.展开更多
A P-vector method was optimized using variational data assimilation technique, with which the vertical structures and seasonal variations of zonal velocities and transports were investigated. The results showed that w...A P-vector method was optimized using variational data assimilation technique, with which the vertical structures and seasonal variations of zonal velocities and transports were investigated. The results showed that westward and eastward flowes occur in the Luzon Strait in the same period in a year. However the net volume transport is westward. In the upper level (0m -500m),the westward flow exits in the middle and south of the Luzon Strait, and the eastward flow exits in the north. There are two centers of westward flow and one center of eastward flow. In the middle of the Luzon Strait, westward and eastward flowes appear alternately in vertical direction. The westward flow strengthens in winter and weakens in summer. The net volume transport is strong in winter (5.53 Sv) but weak in summer (0.29 Sv). Except in summer, the volume transport in the upper level accounts for more than half of the total volume transport (0m bottom). In summer, the net volume transport in the upper level is eastward (1.01 Sv), but westward underneath.展开更多
The initial motivation of the lifting technique is to solve the H∞control problems. However, the conventional weighted H∞design does not meet the conditions required by lifting, so the result often leads to a misjud...The initial motivation of the lifting technique is to solve the H∞control problems. However, the conventional weighted H∞design does not meet the conditions required by lifting, so the result often leads to a misjudgement of the design. Two conditions required by using the lifting technique are presented based on the basic formulae of the lifting. It is pointed out that only the H∞disturbance attenuation problem with no weighting functions can meet these conditions, hence, the application of the lifting technique is quite limited.展开更多
In this paper, three techniques, line run coding, quadtree DF (Depth-First) representation and H coding for compressing classified satellite cloud images with no distortion are presented. In these three codings, the f...In this paper, three techniques, line run coding, quadtree DF (Depth-First) representation and H coding for compressing classified satellite cloud images with no distortion are presented. In these three codings, the first two were invented by other persons and the third one, by ourselves. As a result, the comparison among their compression rates is. given at the end of this paper. Further application of these image compression technique to satellite data and other meteorological data looks promising.展开更多
Rapid advancements of the Industrial Internet of Things(IIoT)and artificial intelligence(AI)pose serious security issues by revealing secret data.Therefore,security data becomes a crucial issue in IIoT communication w...Rapid advancements of the Industrial Internet of Things(IIoT)and artificial intelligence(AI)pose serious security issues by revealing secret data.Therefore,security data becomes a crucial issue in IIoT communication where secrecy needs to be guaranteed in real time.Practically,AI techniques can be utilized to design image steganographic techniques in IIoT.In addition,encryption techniques act as an important role to save the actual information generated from the IIoT devices to avoid unauthorized access.In order to accomplish secure data transmission in IIoT environment,this study presents novel encryption with image steganography based data hiding technique(EISDHT)for IIoT environment.The proposed EIS-DHT technique involves a new quantum black widow optimization(QBWO)to competently choose the pixel values for hiding secrete data in the cover image.In addition,the multi-level discrete wavelet transform(DWT)based transformation process takes place.Besides,the secret image is divided into three R,G,and B bands which are then individually encrypted using Blowfish,Twofish,and Lorenz Hyperchaotic System.At last,the stego image gets generated by placing the encrypted images into the optimum pixel locations of the cover image.In order to validate the enhanced data hiding performance of the EIS-DHT technique,a set of simulation analyses take place and the results are inspected interms of different measures.The experimental outcomes stated the supremacy of the EIS-DHT technique over the other existing techniques and ensure maximum security.展开更多
Machine-type communication (MTC) devices provide a broad range of data collection especially on the massive data generated environments such as urban, industrials and event-enabled areas. In dense deployments, the dat...Machine-type communication (MTC) devices provide a broad range of data collection especially on the massive data generated environments such as urban, industrials and event-enabled areas. In dense deployments, the data collected at the closest locations between the MTC devices are spatially correlated. In this paper, we propose a k-means grouping technique to combine all MTC devices based on spatially correlated. The MTC devices collect the data on the event-based area and then transmit to the centralized aggregator for processing and computing. With the limitation of computational resources at the centralized aggregator, some grouped MTC devices data offloaded to the nearby base station collocated with the mobile edge-computing server. As a sensing capability adopted on MTC devices, we use a power exponential function model to compute a correlation coefficient existing between the MTC devices. Based on this framework, we compare the energy consumption when all data processed locally at centralized aggregator or offloaded at mobile edge computing server with optimal solution obtained by the brute force method. Then, the simulation results revealed that the proposed k-means grouping technique reduce the energy consumption at centralized aggregator while satisfying the required completion time.展开更多
文摘In light of the rapid growth and development of social media, it has become the focus of interest in many different scientific fields. They seek to extract useful information from it, and this is called (knowledge), such as extracting information related to people’s behaviors and interactions to analyze feelings or understand the behavior of users or groups, and many others. This extracted knowledge has a very important role in decision-making, creating and improving marketing objectives and competitive advantage, monitoring events, whether political or economic, and development in all fields. Therefore, to extract this knowledge, we need to analyze the vast amount of data found within social media using the most popular data mining techniques and applications related to social media sites.
基金Science Foundation Ireland(SFI)under Grant Number SFI/16/RC/3918(Confirm)and Marie Sklodowska Curie Grant agreement No.847577 co-fundedthe European Regional Development Fund.Wasif Afzal has received funding from the European Union’s Horizon 2020 research and innovation program under CMC,2023,vol.74,no.22767 Grant agreement Nos.871319,957212from the ECSEL Joint Undertaking(JU)under Grant agreement No 101007350.
文摘Big Data is reforming many industrial domains by providing decision support through analyzing large data volumes.Big Data testing aims to ensure that Big Data systems run smoothly and error-free while maintaining the performance and quality of data.However,because of the diversity and complexity of data,testing Big Data is challenging.Though numerous research efforts deal with Big Data testing,a comprehensive review to address testing techniques and challenges of BigData is not available as yet.Therefore,we have systematically reviewed the Big Data testing techniques’evidence occurring in the period 2010–2021.This paper discusses testing data processing by highlighting the techniques used in every processing phase.Furthermore,we discuss the challenges and future directions.Our findings show that diverse functional,non-functional and combined(functional and non-functional)testing techniques have been used to solve specific problems related to Big Data.At the same time,most of the testing challenges have been faced during the MapReduce validation phase.In addition,the combinatorial testing technique is one of the most applied techniques in combination with other techniques(i.e.,random testing,mutation testing,input space partitioning and equivalence testing)to find various functional faults through Big Data testing.
文摘In this article, the relationship between the knowledge of competitors and the development of new products in the field of capital medical equipment has been investigated. In order to identify the criteria for measuring competitors’ knowledge and developing new capital medical equipment products, marketing experts were interviewed and then a researcher-made questionnaire was compiled and distributed among the statistical sample of the research. Also, in order to achieve the goals of the research, a questionnaire among 100 members of the statistical community was selected, distributed and collected. To analyze the gathered data, the structural equation modeling (SEM) method was used in the SMART PLS 2 software to estimate the model and then the K-MEAN approach was used to cluster the capital medical equipment market based on the knowledge of actual and potential competitors. The results have shown that the knowledge of potential and actual competitors has a positive and significant effect on the development of new products in the capital medical equipment market. From the point of view of the knowledge of actual competitors, the market of “MRI”, “Ultrasound” and “SPECT” is grouped in the low knowledge cluster;“Pet MRI”, “CT Scan”, “Mammography”, “Radiography, Fluoroscopy and CRM”, “Pet CT”, “SPECT CT” and “Gamma Camera” markets are clustered in the medium knowledge. Finally, “Angiography” and “CBCT” markets are located in the knowledge cluster. From the perspective of knowledge of potential competitors, the market of “angiography”, “mammography”, “SPECT” and “SPECT CT” in the low knowledge cluster, “CT scan”, “radiography, fluoroscopy and CRM”, “pet CT”, “CBCT” markets in the medium knowledge cluster and “MRI”, “pet MRI”, “ultrasound” and “gamma camera” markets in the high knowledge cluster are located.
基金the support given by the government and official in Guangnan Countyfunded by[National Natural Science Foundation of China]grant number[41361020,40961031]+3 种基金[Joint Fund of Yunnan Provincial Science and Technology Department and Yunnan University]grant number[2018FY001(-017)][Project of Innovative Talents Cultivation for Graduate Students of Yunnan University]grant number[C176230200][Project of Internationalization and Cultural Inheritance and Innovation of Yunnan University]grant number[C176250202][Science Research Fund of Yunnan Provincial Education Department in 2020:Postgraduate]grant number[2020Y0030]。
文摘The karst mountainous area is an ecologically fragile region with prominent humanland contradictions.The resource-environment carrying capacity(RECC)of this region needs to be further clarified.The development of remote sensing(RS)and geographic information system(GIS)provides data sources and processing platform for RECC monitoring.This study analyzed and established the evaluation index system of RECC by considering particularity in the karst mountainous area of Southwest China;processed multisource RS data(Sentinel-2,Aster-DEM and Landsat-8)to extract the spatial distributions of nine key indexes by GIS techniques(information classification,overlay analysis and raster calculation);proposed the methods of index integration and fuzzy comprehensive evaluation of the RECC by GIS;and took a typical area,Guangnan County in Yunnan Province of China,as an experimental area to explore the effectiveness of the indexes and methods.The results showed that:(1)The important indexes affecting the RECC of karst mountainous area are water resources,tourism resources,position resources,geographical environment and soil erosion environment.(2)Data on cultivated land,construction land,minerals,transportation,water conservancy,ecosystem services,topography,soil erosion and rocky desertification can be obtained from RS data.GIS techniques integrate the information into the RECC results.The data extraction and processing methods are feasible on evaluating RECC.(3)The RECC of Guangnan County was in the mid-carrying level in 2018.The midcarrying and low-carrying levels were the main types,accounting for more than 80.00%of the total study area.The areas with high carrying capacity were mainly distributed in the northern regions of the northwest-southeast line of the county,and other areas have a low carrying capacity comparatively.The coordination between regional resource-environment status and socioeconomic development is the key to improve RECC.This study explores the evaluation index system of RECC in karst mountainous area and the application of multisource RS data and GIS techniques in the comprehensive evaluation.The methods can be applied in related fields to provide suggestions for data/information extraction and integration,and sustainable development.
基金funded by the National Natural Science Foundation of China(NSFC,Nos.12373086 and 12303082)CAS“Light of West China”Program+2 种基金Yunnan Revitalization Talent Support Program in Yunnan ProvinceNational Key R&D Program of ChinaGravitational Wave Detection Project No.2022YFC2203800。
文摘Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal.Analyzing light curves to determine attitude is the most commonly used method.In photometric observations,outliers may exist in the obtained light curves due to various reasons.Therefore,preprocessing is required to remove these outliers to obtain high quality light curves.Through statistical analysis,the reasons leading to outliers can be categorized into two main types:first,the brightness of the object significantly increases due to the passage of a star nearby,referred to as“stellar contamination,”and second,the brightness markedly decreases due to cloudy cover,referred to as“cloudy contamination.”The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive.However,we propose the utilization of machine learning methods as a substitute.Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination,achieving F1 scores of 1.00 and 0.98 on a test set,respectively.We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine,then conduct comparative analyses of the results.
文摘Beginning from the proposition that availability of reliable data is necessary to the application of nuclear techniques, we explore the questions of how such data are obtained and how the extent of their reliability is ascertained. These questions are considered first in general terms in relation to data types and organizational frameworks, then with particular reference to the journal Atomic Data and Nuclear Data Tables. The reliability issue is further discussed in terms of this journal’s policies and unique presentation style.
基金supported by the National Key R&D Program of China Nos.2021YFC2203502 and 2022YFF0711502the National Natural Science Foundation of China(NSFC)(12173077 and 12003062)+5 种基金the Tianshan Innovation Team Plan of Xinjiang Uygur Autonomous Region(2022D14020)the Tianshan Talent Project of Xinjiang Uygur Autonomous Region(2022TSYCCX0095)the Scientific Instrument Developing Project of the Chinese Academy of Sciences,grant No.PTYQ2022YZZD01China National Astronomical Data Center(NADC)the Operation,Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments,budgeted from the Ministry of Finance of China(MOF)and administrated by the Chinese Academy of Sciences(CAS)Natural Science Foundation of Xinjiang Uygur Autonomous Region(2022D01A360)。
文摘To address the problem of real-time processing of ultra-wide bandwidth pulsar baseband data,we designed and implemented a pulsar baseband data processing algorithm(PSRDP)based on GPU parallel computing technology.PSRDP can perform operations such as baseband data unpacking,channel separation,coherent dedispersion,Stokes detection,phase and folding period prediction,and folding integration in GPU clusters.We tested the algorithm using the J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes,and the J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope.We obtained the pulse profiles of each baseband data.Through experimental analysis,we have found that the pulse profiles generated by the PSRDP algorithm in this paper are essentially consistent with the processing results of Digital Signal Processing Software for Pulsar Astronomy(DSPSR),which verified the effectiveness of the PSRDP algorithm.Furthermore,using the same baseband data,we compared the processing speed of PSRDP with DSPSR,and the results showed that PSRDP was not slower than DSPSR in terms of speed.The theoretical and technical experience gained from the PSRDP algorithm research in this article lays a technical foundation for the real-time processing of QTT(Qi Tai radio Telescope)ultra-wide bandwidth pulsar baseband data.
文摘This study aims to assess and to evaluate band ratios, brovey and HSV (Hue-Saturation-Value) techniques for discrimination and mapping the basement rock units exposed at Wadi Bulghah area, Saudi Arabia using multispectral Landsat ETM+ and SPOT-5 panchromatic data.?FieldSpec instrument is utilized to collect the spectral data of diorite, marble, gossan and volcanics, the main rock units exposed at the study area. Spectral profile of diorite exhibits very distinguished absorption features around 2.20 μm and 2.35 μm wavelength regions. These absorption features lead to lowering the band ratio values within the band-7 wavelength region. Diorite intrusions appear to have grey and dark grey image signatures on 7/3 and 7/2 band ratio images respectively. On the false color composite ratio image (7/3:R;7/2:G and 5/2:B), diorite, marble, gossan and volcanics have very dark brown, dark blue, white and yellowish brown image signatures respectively. Image fusion between previously mentioned FCC ratio image and high spatial resolution (5 meters) SPOT-5 panchromatic image is carried out by using brovey and HSV transformation methods. Visual and statistical assessment methods prove that HSV fused image yields best image interpretability results rather than brovey image. It improves the spatial resolution of the original FCC ratios image with acceptable spectral preservation.
文摘Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. Accurate segmentation is required for volume determination, 3D rendering, radiation therapy, and surgery planning. In medical images, segmentation has traditionally been done by human experts. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore, automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. Many methods have been proposed to detect and segment 2D shapes, most of which involve template matching. Advanced segmentation techniques called Snakes or active contours have been used, considering deformable models or templates. The main purpose of this work is to apply segmentation techniques for the definition of 3D organs (anatomical structures) when big data information has been stored and must be organized by the doctors for medical diagnosis. The processes would be implemented in the CT images from patients with COVID-19.
基金funded by the National Natural Science Foundation of China(Grant No.41941019)the State Key Laboratory of Hydroscience and Engineering(Grant No.2019-KY-03)。
文摘Real-time prediction of the rock mass class in front of the tunnel face is essential for the adaptive adjustment of tunnel boring machines(TBMs).During the TBM tunnelling process,a large number of operation data are generated,reflecting the interaction between the TBM system and surrounding rock,and these data can be used to evaluate the rock mass quality.This study proposed a stacking ensemble classifier for the real-time prediction of the rock mass classification using TBM operation data.Based on the Songhua River water conveyance project,a total of 7538 TBM tunnelling cycles and the corresponding rock mass classes are obtained after data preprocessing.Then,through the tree-based feature selection method,10 key TBM operation parameters are selected,and the mean values of the 10 selected features in the stable phase after removing outliers are calculated as the inputs of classifiers.The preprocessed data are randomly divided into the training set(90%)and test set(10%)using simple random sampling.Besides stacking ensemble classifier,seven individual classifiers are established as the comparison.These classifiers include support vector machine(SVM),k-nearest neighbors(KNN),random forest(RF),gradient boosting decision tree(GBDT),decision tree(DT),logistic regression(LR)and multilayer perceptron(MLP),where the hyper-parameters of each classifier are optimised using the grid search method.The prediction results show that the stacking ensemble classifier has a better performance than individual classifiers,and it shows a more powerful learning and generalisation ability for small and imbalanced samples.Additionally,a relative balance training set is obtained by the synthetic minority oversampling technique(SMOTE),and the influence of sample imbalance on the prediction performance is discussed.
文摘In this paper, a time-varying rain characterization and diurnal variation in the Ku-band satellite systems simulated with synthetic storm techniques (SST) over a tropical location in Nigeria have been presented. Three years’ rain rate time-series data measured by a raingauge located inside the Federal University of Technology Akure, Nigeria were utilized for the purpose of this work. The analysis is based on the CDF of one-minute rain rate;time-series simulated annual/seasonal and diurnal rain rate, rain attenuation statistics and fade margins observed over four time intervals: 00:00-06:00, 06:00-12:00, 12:00-18:00 and 18:00-24:00. In addition, comparison was also made between the synthesized values and rain attenuation statistics, at 12.245 GHz for a hypothetical downlink from EUTELSAT W4/W7 satellite in the area. It could be observed that at 99.99% link availability, the fade margin as high as ~20 dB may be required at Ku band uplink frequency bands in this area. We also observed that the communication downlinks working in the early morning and early to late in the evening hours must be compensated with an appropriate Down-Link Power Control (DLPC) for optimum performances during severe atmospheric influences in the region.
基金This work was supported by the knowledge Innovation Project of the Chinese Academy of Sciences under contract Grant No. KZCX2- 205) the National Natural Science Foundation of China under contract Grand No. 40106002.
文摘A P - vector method is optimized using the variational data assimilation technique(VDAT). The absolute geostrophic velocity fields in the vicinity of the Luzon Strait (LS) are calculated, the spatial structures and seasonal variations of the absolute geostrophic velocity field are investigated. Our results show that the Kuroshio enters the South China Sea (SCS) in the south and middle of the Luzon Strait and flows out in the north, so the Kuroshio makes a slight clockwise curve in the Luzon Strait, and the curve is strong in winter and weak in summer. During the winter, a westward current appears in the surface, and locates at the west of the Luzon Strait. It is the north part of a cyclonic gyre which exits in the northeast of the SCS; an anti-cyclonic gyre occurs on the intermediate level, and it exits in the northeast of the SCS, and an eastward current exits in the southeast of the anti-cyclonic gyre.
文摘The Kehdolan area is located at 20 kilometers to the?south-east of Dozdozan Town (Eastern Azarbaijan Province). According to structural geology, volconic rocks are situated in Alborz-Azarbyjan zone, and faults?are?observed?in?the?same direction to this system with SE-NW trend. The results show that kaolinite alteration trend with Argilic and propylitic veins?is the?same direction with SW-NE faults in this area. Therefore, these faults with these trends can be considered as the mineralization control for determination of the alterations. Different image processing techniques,?such as false color composite?(FCC), band ratios, color ratio composite?(CRC), principal component?analysis?(PCA), Crosta technique, supervised spectral angle mapping?(SAM), are used for?identification of the alteration zones associated with copper mineralization. In this project ASTER?data are process and spectral analysis to fit for recognizing intensity and kind of argillic, propylitic,?philic, and ETM+ data?which?are process and to fit for iron oxide and relation to metal mineralization of the area. For recognizing different alterations of the study area, some chemical and mineralogical analysis data from the samples showed that ASTER data and ETM+ data were?capable of hydrothermal alteration mapping with copper mineralization.?Copper mineralization in the region is in agreement with argillic alteration. SW-NE trending faults controlled the mineralization process.
基金supported by the Ministry of Science and Technology of China(2015FY310200)the National Key Research and Development Program of China(2016YFB0501405)+1 种基金the National Natural Science Foundation of China(11173048 and 11403076)the State Key Laboratory of Aerospace Dynamics and the Crustal Movement Observation Network of China(CMONOC)
文摘Based on years of input from the four geodetic techniques (SLR, GPS, VLBI and DORIS), the strategies of the combination were studied in SHAO to generate a new global terrestrial reference frame as the material realization of the ITRS defined in IERS Conventions. The main input includes the time series of weekly solutions (or fortnightly for SLR 1983-1993) of observational data for satellite techniques and session-wise normal equations for VLBI. The set of estimated unknowns includes 3- dimensional Cartesian coordinates at the reference epoch 2005.0 of the stations distributed globally and their rates as well as the time series of consistent Earth Orientation Parameters (EOPs) at the same epochs as the input. Besides the final solution, namely SOL-2, generated by using all the inputs before 2015.0 obtained from short-term observation processing, another reference solution, namely SOL- 1, was also computed by using the input before 2009.0 based on the same combination of procedures for the purpose of comparison with ITRF2008 and DTRF2008 and for evaluating the effect of the latest six more years of data on the combined results. The estimated accuracy of the x-component and y-component of the SOL- 1 TRF-origin was better than 0.1 mm at epoch 2005.0 and better than 0.3 mm yr- 1 in time evolution, either compared with ITRF2008 or DTRF2008. However, the z-component of the translation parameters from SOL-1 to ITRF2008 and DTRF2008 were 3.4 mm and -1.0 ram, respectively. It seems that the z-component of the SOL-1 TRF-origin was much closer to the one in DTRF2008 than the one in ITRF2008. The translation parameters from SOL-2 to ITRF2014 were 2.2, -1.8 and 0.9 mm in the x-, y- and z-components respectively with rates smaller than 0.4 mmyr-1. Similarly, the scale factor transformed from SOL-1 to DTRF2008 was much smaller than that to ITRF2008. The scale parameter from SOL-2 to ITRF2014 was -0.31 ppb with a rate lower than 0.01 ppb yr-1. The external precision (WRMS) compared with IERS EOP 08 C04 of the combined EOP series was smaller than 0.06 mas for the polar motions, smaller than 0.01 ms for the UT1-UTC and smaller than 0.02 ms for the LODs. The precision of the EOPs in SOL-2 was slightly higher than that of SOL-1.
基金Supported by the Major State Basic Research Program (No. G1999043810) Open Laboratory for Tropical Marine Environmental Dynamics (LED)+2 种基金 South China Sea Institute of Oceanology Chinese Academy of Sciences and the NSFC (No. 40306004).
文摘A P-vector method was optimized using variational data assimilation technique, with which the vertical structures and seasonal variations of zonal velocities and transports were investigated. The results showed that westward and eastward flowes occur in the Luzon Strait in the same period in a year. However the net volume transport is westward. In the upper level (0m -500m),the westward flow exits in the middle and south of the Luzon Strait, and the eastward flow exits in the north. There are two centers of westward flow and one center of eastward flow. In the middle of the Luzon Strait, westward and eastward flowes appear alternately in vertical direction. The westward flow strengthens in winter and weakens in summer. The net volume transport is strong in winter (5.53 Sv) but weak in summer (0.29 Sv). Except in summer, the volume transport in the upper level accounts for more than half of the total volume transport (0m bottom). In summer, the net volume transport in the upper level is eastward (1.01 Sv), but westward underneath.
基金Supported by the Harbin Engineering University Fund for Basic Projects (heuft06041)
文摘The initial motivation of the lifting technique is to solve the H∞control problems. However, the conventional weighted H∞design does not meet the conditions required by lifting, so the result often leads to a misjudgement of the design. Two conditions required by using the lifting technique are presented based on the basic formulae of the lifting. It is pointed out that only the H∞disturbance attenuation problem with no weighting functions can meet these conditions, hence, the application of the lifting technique is quite limited.
文摘In this paper, three techniques, line run coding, quadtree DF (Depth-First) representation and H coding for compressing classified satellite cloud images with no distortion are presented. In these three codings, the first two were invented by other persons and the third one, by ourselves. As a result, the comparison among their compression rates is. given at the end of this paper. Further application of these image compression technique to satellite data and other meteorological data looks promising.
基金This research work was funded by Institution Fund projects under Grant No.(IFPRC-215-249-2020)Therefore,authors gratefully acknowledge technical and financial support from the Ministry of Education and King Abdulaziz University,DSR,Jeddah,Saudi Arabia.
文摘Rapid advancements of the Industrial Internet of Things(IIoT)and artificial intelligence(AI)pose serious security issues by revealing secret data.Therefore,security data becomes a crucial issue in IIoT communication where secrecy needs to be guaranteed in real time.Practically,AI techniques can be utilized to design image steganographic techniques in IIoT.In addition,encryption techniques act as an important role to save the actual information generated from the IIoT devices to avoid unauthorized access.In order to accomplish secure data transmission in IIoT environment,this study presents novel encryption with image steganography based data hiding technique(EISDHT)for IIoT environment.The proposed EIS-DHT technique involves a new quantum black widow optimization(QBWO)to competently choose the pixel values for hiding secrete data in the cover image.In addition,the multi-level discrete wavelet transform(DWT)based transformation process takes place.Besides,the secret image is divided into three R,G,and B bands which are then individually encrypted using Blowfish,Twofish,and Lorenz Hyperchaotic System.At last,the stego image gets generated by placing the encrypted images into the optimum pixel locations of the cover image.In order to validate the enhanced data hiding performance of the EIS-DHT technique,a set of simulation analyses take place and the results are inspected interms of different measures.The experimental outcomes stated the supremacy of the EIS-DHT technique over the other existing techniques and ensure maximum security.
文摘Machine-type communication (MTC) devices provide a broad range of data collection especially on the massive data generated environments such as urban, industrials and event-enabled areas. In dense deployments, the data collected at the closest locations between the MTC devices are spatially correlated. In this paper, we propose a k-means grouping technique to combine all MTC devices based on spatially correlated. The MTC devices collect the data on the event-based area and then transmit to the centralized aggregator for processing and computing. With the limitation of computational resources at the centralized aggregator, some grouped MTC devices data offloaded to the nearby base station collocated with the mobile edge-computing server. As a sensing capability adopted on MTC devices, we use a power exponential function model to compute a correlation coefficient existing between the MTC devices. Based on this framework, we compare the energy consumption when all data processed locally at centralized aggregator or offloaded at mobile edge computing server with optimal solution obtained by the brute force method. Then, the simulation results revealed that the proposed k-means grouping technique reduce the energy consumption at centralized aggregator while satisfying the required completion time.