Funding: Provided by the Marine S&T Fund of Shandong Province for Pilot National Laboratory for Marine Science and Technology (Qingdao) (No. 2018SDKJ0501-2).
Abstract: Clustering is a group of unsupervised statistical techniques commonly used in many disciplines. Considering their applications to fish abundance data, many technical details need to be considered to ensure reasonable interpretation. However, the reliability and stability of clustering methods have rarely been studied in the context of fisheries. This study presents an intensive evaluation of three common clustering methods, hierarchical clustering (HC), K-means (KM), and expectation-maximization (EM), based on fish community surveys in the coastal waters of Shandong, China. We evaluated the performances of these three methods across different numbers of clusters, data sizes, and data transformation approaches, focusing on consistency validation using the average proportion of non-overlap (APN) index. The results indicate that the three methods tend to disagree on the optimal number of clusters. EM performed relatively better at avoiding unbalanced classifications, whereas HC and KM provided more stable clustering results. Data transformations, including scaling, square-root, and log-transformation, had substantial influences on the clustering results, especially for KM. Moreover, transformation also influenced clustering stability, wherein scaling tended to provide a stable solution at the same number of clusters. The APN values indicated improved stability with increasing data size, and the effect leveled off beyond about 70 samples in general, most quickly for EM. We conclude that the best clustering method can be chosen depending on the aim of the study and the number of clusters; in general, KM was relatively robust in our tests. We also provide recommendations for future applications of clustering analyses. This study helps ensure the credibility of the application and interpretation of clustering methods.
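The stability evaluation described above can be sketched on synthetic data. This is a minimal illustration only: the toy "abundance" matrix and the tiny k-means implementation are stand-ins, and the disagreement score is a simplified pair-based proxy in the spirit of APN, not the exact APN formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # Minimal k-means for the sketch (random init from data points).
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def pair_disagreement(a, b):
    # Simplified stability score in the spirit of APN: fraction of ordered
    # sample pairs whose co-membership differs between two clusterings
    # (0 = perfectly stable).
    ca = a[:, None] == a[None, :]
    cb = b[:, None] == b[None, :]
    n = len(a)
    return (ca != cb).sum() / (n * (n - 1))

# Toy "abundance" matrix: 80 samples x 6 species, skewed counts.
X = rng.lognormal(2.0, 1.0, size=(80, 6))
for name, Xt in [("raw", X), ("log", np.log1p(X)),
                 ("scaled", (X - X.mean(0)) / X.std(0))]:
    full = kmeans(Xt, 3)
    reduced = kmeans(np.delete(Xt, 0, axis=1), 3)  # drop one species column
    print(name, round(pair_disagreement(full, reduced), 3))
```

Lower scores mean the partition survives deleting a column, which mirrors how transformation choice can shift stability as reported above.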
Abstract: This paper focuses on the integration and data transformation between GPS and total station. It emphasizes the way to transform WGS84 Cartesian coordinates into local two-dimensional plane coordinates and the orthometric height. A GPS receiver, total station, radio, notebook computer, and the corresponding software work together to form a new surveying system, the super-totalstation positioning system (SPS), and a new surveying model for terrestrial surveying. With the help of this system, the positions of detail points can be measured.
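The WGS84 Cartesian-to-local conversion that the paper emphasizes can be sketched with the standard geodetic formulas. A minimal sketch, assuming an east-north-up (ENU) frame at an illustrative reference station; note the "up" component here is ellipsoidal, so obtaining orthometric height would additionally require a geoid model.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                 # semi-major axis (m)
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geodetic latitude/longitude/height -> WGS84 Cartesian (ECEF)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

def ecef_to_enu(xyz, lat0_deg, lon0_deg, h0):
    """WGS84 Cartesian -> local east/north plane coordinates and up component."""
    lat, lon = math.radians(lat0_deg), math.radians(lon0_deg)
    x0, y0, z0 = geodetic_to_ecef(lat0_deg, lon0_deg, h0)
    dx, dy, dz = xyz[0] - x0, xyz[1] - y0, xyz[2] - z0
    sl, cl = math.sin(lat), math.cos(lat)
    so, co = math.sin(lon), math.cos(lon)
    east = -so * dx + co * dy
    north = -sl * co * dx - sl * so * dy + cl * dz
    up = cl * co * dx + cl * so * dy + sl * dz
    return east, north, up

# A point 5 m above the (illustrative) reference station maps to ENU ~ (0, 0, 5).
p = geodetic_to_ecef(36.0, 120.0, 55.0)
east, north, up = ecef_to_enu(p, 36.0, 120.0, 50.0)
```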
Abstract: Nowadays, side-looking SAR echo data can be obtained easily through commercial channels, while data from other SAR imaging modes, such as squint and spotlight, are difficult to acquire. This paper presents a new scheme to transform side-looking returns into squint ones, via a direct and an indirect approach. The direct transformation uses data with a wide azimuth beam angle; the maximum attainable squint angle is limited to several degrees. Squint data under the indirect transformation are obtained by adding a platform velocity along the slant range according to the required squint angle; the squint data are then determined by the angle between the new forward velocity and the line-of-sight direction. This method yields a higher squint angle than the first one. Verification shows the feasibility of these approaches, illustrated with side-looking E-SAR raw data processing. Future work will address precise Doppler centroid estimation and the development of an effective imaging algorithm.
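The geometry of the indirect transformation can be illustrated numerically. A minimal sketch, assuming the squint angle is the deviation of the line of sight from broadside of the new velocity vector (along-track component plus the injected slant-range component); the velocity values are illustrative, not E-SAR parameters.

```python
import math

def squint_from_added_radial_velocity(v_along, v_radial):
    # New platform velocity = along-track component + injected slant-range
    # (radial) component. The squint angle is taken as the deviation of the
    # line of sight from broadside of this new velocity vector.
    v_new = math.hypot(v_along, v_radial)
    return math.degrees(math.asin(v_radial / v_new))

# Illustrative airborne numbers: 90 m/s along track, 15 m/s injected radially.
print(round(squint_from_added_radial_velocity(90.0, 15.0), 2))  # -> 9.46
```

Larger injected radial velocities give larger squint angles, matching the claim that the indirect approach reaches higher squint than the direct one.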
Funding: This research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the "Project for Research and Development with Middle Markets Enterprises and DNA (Data, Network, AI) Universities" (AI-based Safety Assessment and Management System for Concrete Structures) (Reference Number P0024559), supervised by the Korea Institute for Advancement of Technology (KIAT).
Abstract: Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. Noise addition is used for data augmentation: Gaussian noise with a noise level of 0.002 is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series into images. This enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, with the detected anomalous areas represented as heat maps. By applying an anomaly map to the original image, it is possible to localize the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, when the data are processed as images rather than as raw time series, both the data size and the training time are significantly reduced. The proposed method can provide an important springboard for research in anomaly detection using time-series data, and it helps solve problems such as analyzing complex patterns in data in a lightweight manner.
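The augmentation and image-conversion steps above can be sketched as follows. This is a simplified MTF (quantile binning plus a single transition matrix spread over all time pairs), not necessarily the exact formulation used in the paper; the sine series is illustrative, and the noise level 0.002 is the one stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_noise(series, noise_level=0.002):
    # Gaussian-noise augmentation at the level quoted above (0.002).
    return series + rng.normal(0.0, noise_level, size=series.shape)

def markov_transition_field(series, n_bins=8):
    # Minimal MTF sketch: quantile-bin the series, estimate the Markov
    # transition matrix between consecutive bins, then spread it over all
    # (i, j) time pairs to form a T x T image.
    edges = np.quantile(series, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(series, edges)            # bin index per time step
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(bins[:-1], bins[1:]):
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalize
    return W[np.ix_(bins, bins)]

x = np.sin(np.linspace(0, 6 * np.pi, 128))
img = markov_transition_field(augment_with_noise(x))
print(img.shape)  # -> (128, 128)
```

The resulting 2D array is what an image-based detector such as PatchCore would consume in place of the raw series.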
Funding: Project supported by the National Natural Science Foundation of China (No. 62132017) and the Fundamental Research Funds for the Central Universities, China (No. 226202200235).
Abstract: Automatic visualization generates meaningful visualizations to support data analysis and pattern finding for novice or casual users who are not familiar with visualization design. Current automatic visualization approaches mainly adopt aggregation and filtering to extract patterns from the original data. However, these limited data transformations fail to capture complex patterns such as clusters and correlations. Although recent advances in feature engineering provide the potential for more kinds of automatic data transformations, the auto-generated transformations lack explainability concerning how patterns are connected with the original features. To tackle these challenges, we propose a novel explainable recommendation approach for extended kinds of data transformations in automatic visualization. We summarize the space of feasible data transformations and measures of the explainability of transformation operations through a literature review and a pilot study, respectively. A recommendation algorithm is designed to compute optimal transformations that can reveal specified types of patterns while maintaining explainability. We demonstrate the effectiveness of our approach through two cases and a user study.
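The core trade-off, pattern strength versus explainability, can be sketched with a toy scorer. The candidate transformations, their scores, and the linear weighting are all illustrative assumptions, not the paper's measured values or its actual recommendation algorithm.

```python
# Hypothetical candidates: each has a pattern-strength score and an
# explainability score in [0, 1] (illustrative numbers only).
candidates = {
    "aggregate_by_month": {"pattern": 0.40, "explainability": 0.95},
    "filter_outliers":    {"pattern": 0.35, "explainability": 0.90},
    "kmeans_clustering":  {"pattern": 0.80, "explainability": 0.60},
    "pca_projection":     {"pattern": 0.85, "explainability": 0.35},
}

def recommend(cands, alpha=0.6):
    # alpha trades pattern strength against explainability:
    # alpha=1 cares only about patterns, alpha=0 only about explainability.
    def score(c):
        return alpha * c["pattern"] + (1 - alpha) * c["explainability"]
    return max(cands, key=lambda name: score(cands[name]))

print(recommend(candidates))  # -> kmeans_clustering
```

Shifting alpha toward 0 pushes the recommendation back to simple, easily explained operations such as aggregation.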
Abstract: To address the limited access to global features in image recognition and the difficulty of improving recognition accuracy, an image recognition method based on a randomly enhanced Swin-Tiny Transformer lightweight model is proposed. In the preprocessing stage, the method enhances image features with a random data augmentation based enhancement (RDABE) algorithm, and it adopts the Transformer's self-attention mechanism to obtain more comprehensive high-level visual semantic information. By optimizing the Swin-Tiny Transformer model and fine-tuning its parameters on a maize disease dataset, the applicability of the algorithm to maize diseases in the agricultural domain was verified, achieving more accurate disease detection. Experimental results show that the lightweight Swin-Tiny+RDABE model based on random augmentation reaches an accuracy of 93.5867% on maize disease image recognition. Under consistent parameter weights, comparison experiments against high-performing lightweight Transformer and convolutional neural network (CNN) models show that the improved model exceeds the Swin-Tiny Transformer, Deit3_Small, Vit_Small, Mobilenet_V3_Small, ShufflenetV2, and Efficientnet_B1_Pruned models in accuracy by 1.1877% to 4.9881%, and converges rapidly.
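The random-augmentation preprocessing step can be sketched as a pool of operations, one chosen at random per image. The operations and probabilities here are illustrative assumptions for the sketch, not the RDABE algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_augment(img):
    # Pick one enhancement at random before the image enters the model.
    ops = [
        lambda x: x[:, ::-1],                                 # horizontal flip
        lambda x: np.rot90(x),                                # 90-degree rotation
        lambda x: np.clip(x * rng.uniform(0.8, 1.2), 0, 1),   # brightness jitter
        lambda x: x,                                          # identity
    ]
    return ops[rng.integers(len(ops))](img)

# Illustrative single-channel 224x224 "image" with values in [0, 1).
img = rng.random((224, 224))
aug = random_augment(img)
print(aug.shape)  # -> (224, 224)
```

In a real pipeline each training batch would pass through this step so the Transformer sees a differently enhanced view of every image per epoch.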
Funding: Supported by the Universiti Putra Malaysia Grant Scheme (Putra Grant) (GP/2020/9692500).
Abstract: Data transformation is the core process in migrating a database from a relational database to a NoSQL database such as a column-oriented database. However, there is no standard guideline for data transformation from a relational database to a NoSQL database. A number of schema transformation techniques have been proposed to improve the data transformation process, and they resulted in better query processing time compared to the relational database. However, these approaches produced redundant tables in the resulting schema, which in turn consume large, unnecessary storage and yield high query processing time due to the redundant column families in the transformed column-oriented database. In this paper, an efficient data transformation technique from a relational database to a column-oriented database is proposed. The proposed schema transformation technique is based on the combination of a denormalization approach, data access patterns, and a multiple-nested schema. To validate the proposed work, the technique is implemented by transforming data from a MySQL database to a MongoDB database. A benchmark transformation technique is also performed, against which the query processing time and the storage size are compared. Based on the experimental results, the proposed transformation technique showed significant improvement in query processing time and storage space usage due to the reduced number of column families in the column-oriented database.
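The denormalization into a multiple-nested schema can be sketched as follows: parent rows absorb their child rows as embedded arrays, so a single document answers a whole read without joins. The tables, field names, and access pattern are illustrative assumptions, not the paper's schema.

```python
# Hypothetical normalized relational tables (rows as dicts).
customers = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
orders = [{"id": 10, "customer_id": 1}, {"id": 11, "customer_id": 1}]
items = [{"order_id": 10, "sku": "A", "qty": 2},
         {"order_id": 11, "sku": "B", "qty": 1}]

def denormalize(customers, orders, items):
    # Nest orders inside customers, and line items inside orders,
    # following a read-dominant access pattern (customer -> full history).
    docs = []
    for c in customers:
        c_orders = []
        for o in orders:
            if o["customer_id"] == c["id"]:
                o_items = [i for i in items if i["order_id"] == o["id"]]
                c_orders.append({"id": o["id"], "items": o_items})
        docs.append({"_id": c["id"], "name": c["name"], "orders": c_orders})
    return docs

docs = denormalize(customers, orders, items)
# Each document could then be stored as-is, e.g. with pymongo:
# db.customers.insert_many(docs)   # hypothetical collection name
```

Embedding trades some write-time duplication for fewer collections (or column families) touched per query, which is the storage/latency trade-off the abstract discusses.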
Abstract: This study presents a comparative analysis of two image enhancement techniques, the Continuous Wavelet Transform (CWT) and the Fast Fourier Transform (FFT), in the context of improving the clarity of high-quality 3D seismic data obtained from the Tano Basin in West Africa, Ghana. The research compares image clarity in seismic attribute analysis to facilitate the identification of reservoir features within subsurface structures. The findings indicate that CWT has a significant advantage over FFT in terms of image quality and in identifying subsurface structures, providing a clearer representation that makes it more effective for seismic attribute analysis. The study highlights the importance of choosing an image enhancement technique based on the specific application needs and the broader context of the study: while CWT provides high-quality images and superior performance in identifying subsurface structures, the selection between these methods should be made judiciously, taking into account the objectives of the study and the characteristics of the signals being analyzed. The research provides insights into the decision-making process for selecting image enhancement techniques in seismic data analysis, helping researchers and practitioners make informed choices that cater to the unique requirements of their studies. Ultimately, this study contributes to the advancement of subsurface imaging and geological feature identification.
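The two routes can be contrasted on a synthetic trace: the FFT yields one global amplitude spectrum, while the CWT yields a time-localized scalogram. This is a pure-NumPy sketch with a simple Ricker-style (Mexican hat) wavelet bank, assuming an illustrative two-tone trace, not the software or data used in the study.

```python
import numpy as np

def ricker(length, a):
    # Ricker-style (Mexican hat) wavelet of the given width parameter `a`.
    t = np.arange(length) - (length - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_ricker(signal, widths):
    # Convolve the trace with wavelets of increasing width -> scalogram rows.
    out = np.empty((len(widths), len(signal)))
    for i, w in enumerate(widths):
        wavelet = ricker(min(10 * int(w), len(signal)), w)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

# Illustrative trace: 25 Hz and 60 Hz tones over 1 s, 512 samples.
t = np.linspace(0, 1, 512)
trace = np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

spectrum = np.abs(np.fft.rfft(trace))                     # FFT: global frequencies
scalogram = np.abs(cwt_ricker(trace, np.arange(1, 16)))   # CWT: time-localized
print(spectrum.shape, scalogram.shape)
```

The spectrum shows *which* frequencies are present; the scalogram additionally shows *where* along the trace each scale dominates, which is why CWT-based attributes can resolve localized subsurface features that a global FFT blurs.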
Abstract: To address the problems that class imbalance in clinical medical scale data easily degrades models, and that deep learning frameworks struggle to match traditional machine learning on scale-data tasks, a Transformer network model based on cascaded undersampling (layer by layer Transformer, LLT) is proposed. LLT prunes the majority-class data layer by layer through cascaded undersampling, balancing the classes and reducing the impact of class imbalance on the classifier, and it uses the attention mechanism to assess the relevance of input features for feature selection, refining feature extraction and improving model performance. Using rheumatoid arthritis (RA) data as test samples, experiments show that, without changing the sample distribution, the proposed cascaded undersampling method increases the recognition rate of minority classes by 6.1%, which is 1.4% and 10.4% higher than the commonly used NEARMISS and ADASYN methods, respectively. LLT reaches 72.6% accuracy and a 71.5% F1-score on the RA scale data, with an AUC of 0.89 and an mAP of 0.79, outperforming mainstream scale-data classification models such as RF, XGBoost, and GBDT. Finally, the modeling process is visualized and the features influencing RA are analyzed, providing useful guidance for the clinical diagnosis of RA.
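The cascaded (layer-by-layer) undersampling idea can be sketched as follows: the majority class is trimmed in successive stages rather than in one shot, stopping once the classes balance. The per-layer retention ratio and the synthetic data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def cascade_undersample(X, y, majority=0, ratio=0.7):
    # Each layer keeps a random `ratio` fraction of the remaining majority
    # samples, but never fewer than the minority count, until balanced.
    X, y = np.asarray(X), np.asarray(y)
    while (y == majority).sum() > (y != majority).sum():
        maj = np.flatnonzero(y == majority)
        keep_n = max(int(len(maj) * ratio), int((y != majority).sum()))
        keep = rng.choice(maj, size=keep_n, replace=False)
        mask = y != majority          # always keep every minority sample
        mask[keep] = True
        X, y = X[mask], y[mask]
    return X, y

# Illustrative imbalanced dataset: 900 majority vs 100 minority samples.
X = rng.random((1000, 6))
y = np.array([0] * 900 + [1] * 100)
Xb, yb = cascade_undersample(X, y)
print((yb == 0).sum(), (yb == 1).sum())  # -> 100 100
```

Trimming gradually, rather than discarding 800 samples at once, gives each intermediate layer a less distorted view of the majority class, which is the motivation behind the cascade.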