With the popularization of the Internet and the development of technology, cyber threats are increasing day by day. Threats such as malware, hacking, and data breaches have had a serious impact on cybersecurity. The network security environment in the era of big data presents the characteristics of large data volumes, high diversity, and high real-time requirements. Traditional security defense methods and tools are unable to cope with complex and changing network security threats. This paper proposes a machine-learning security defense algorithm based on metadata association features. The algorithm emphasizes control over unauthorized users through privacy, integrity, and availability. A user model is established, and the mapping between the user model and the metadata of the data sources is generated. By analyzing the user model and its corresponding mapping relationship, a query on the user model can be decomposed into queries on the various heterogeneous data sources, realizing the integration of heterogeneous data sources based on metadata association features. Customer information is defined and classified, sensitive data is automatically identified and perceived, a behavior audit and analysis platform is built, user behavior trajectories are analyzed, and the construction of a machine learning customer information security defense system is completed. The experimental results show that when the data volume is 5×10³ bits, the data storage integrity of the proposed method is 92%, the data accuracy is 98%, and the success rate of data intrusion is only 2.6%. It can be concluded that the data storage method in this paper is safe, the data accuracy is always at a high level, and the data disaster recovery performance is good. This method can effectively resist data intrusion and provides high access control security. It can not only detect all viruses in user data storage but also realize integrated virus processing, further optimizing the security defense effect of user big data.
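The query-decomposition step described above, splitting a user-model query into per-source queries via a metadata mapping, can be sketched as follows. This is a minimal illustration under assumed names (SOURCE_METADATA, decompose_query); the paper's actual mapping and implementation are not specified in the abstract.

```python
# Hypothetical sketch of query decomposition via a metadata association map.
# Map: user-model field -> (heterogeneous data source, source-local field name)
SOURCE_METADATA = {
    "customer_name": ("crm_db", "cust_nm"),
    "purchase_total": ("orders_db", "amt_sum"),
    "login_count": ("auth_log", "n_logins"),
}

def decompose_query(requested_fields):
    """Group requested user-model fields by the source that holds them,
    translating field names through the metadata map."""
    per_source = {}
    for field in requested_fields:
        source, local_name = SOURCE_METADATA[field]
        per_source.setdefault(source, []).append(local_name)
    return per_source

plan = decompose_query(["customer_name", "login_count"])
print(plan)  # {'crm_db': ['cust_nm'], 'auth_log': ['n_logins']}
```

Each per-source sub-query can then be executed against its own store, and the results joined back into the unified user-model view.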
In view of the problems of inconsistent data semantics, inconsistent data formats, and difficult data quality assurance between the railway engineering design phase and the construction and operation phases, as well as the difficulty of fully realizing the value of design results, this paper proposes a design and implementation scheme for a railway engineering collaborative design platform. The platform mainly includes functional modules such as metadata management, design collaboration, design delivery management, a model component library, model rendering services, and Building Information Modeling (BIM) application services. On this basis, research is conducted on multi-disciplinary parameterized collaborative design technology for railway engineering, infrastructure data management and delivery technology, and multi-source design data fusion and application technology. The platform is compared with other railway design software to further validate its advantages and advanced features. It has been widely applied in multiple railway construction projects, greatly improving design and project management efficiency.
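As an illustration of the kind of record the platform's metadata management and model component library modules might maintain, the following sketch defines a minimal component metadata entry. All field names and values here are assumptions for illustration, not the platform's actual schema.

```python
# Hypothetical metadata record for a model-component-library entry.
from dataclasses import dataclass, asdict

@dataclass
class ComponentMetadata:
    component_id: str
    discipline: str   # e.g. track, bridge, signalling
    phase: str        # design / construction / operation
    fmt: str          # delivery format, e.g. an IFC variant
    version: int      # revision tracked across design iterations

entry = ComponentMetadata("BRG-0042", "bridge", "design", "IFC4", 3)
print(asdict(entry)["discipline"])  # bridge
```

Keeping such records consistent across phases is one way a platform can address the semantic and format inconsistencies the abstract describes.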
Purpose: The purpose of this paper is to provide a framework for addressing the disconnect between metadata and data science. Data science cannot progress without metadata research. This paper takes steps toward advancing the synergy between metadata and data science, and identifies pathways for developing a more cohesive metadata research agenda in data science. Design/methodology/approach: This paper identifies factors that challenge metadata research in the digital ecosystem, defines metadata and data science, and presents the concepts of big metadata, smart metadata, and metadata capital as part of a metadata lingua franca connecting to data science. Findings: The "utilitarian nature" and "historical and traditional views" of metadata are identified as two intersecting factors that have inhibited metadata research. Big metadata, smart metadata, and metadata capital are presented as part of a metadata lingua franca to help frame research in the data science research space. Research limitations: There are additional, intersecting factors to consider that likely inhibit metadata research, and other significant metadata concepts to explore. Practical implications: The immediate contribution of this work is that it may elicit response, critique, revision, or, more significantly, motivate research. The work presented can encourage more researchers to consider the significance of metadata as a research-worthy topic within data science and the larger digital ecosystem. Originality/value: Although metadata research has not kept pace with other data science topics, little attention has been directed to this problem. This is surprising, given that metadata is essential for data science endeavors. This examination synthesizes original and prior scholarship to provide new grounding for metadata research in data science.
Vast amounts of heterogeneous marine observation data have been accumulated due to the rapid development of ocean observation technology. Several state-of-the-art methods have been proposed to manage the emerging Internet of Things (IoT) sensor data. However, an inefficient data management strategy during the data storage process can lead to missing metadata; thus, part of the sensor data cannot be indexed and utilized (i.e., a 'data swamp'). Researchers have focused on optimizing storage procedures to prevent such disasters, but few have attempted to restore the missing metadata. In this study, we propose an AI-based algorithm to reconstruct the metadata of heterogeneous marine data in data swamps. First, a MapReduce algorithm is proposed to preprocess raw marine data and extract its feature tensors in parallel. Second, the feature tensors are loaded into a machine learning algorithm and a clustering operation is performed; the similarity between incoming data and the trained clusters is also calculated. Finally, metadata reconstruction is performed based on the processing results of existing marine observation data. The experiments are designed using existing datasets obtained from ocean observing systems, verifying the effectiveness of the algorithms. The results demonstrate the excellent performance of the proposed algorithm for the metadata reconstruction of heterogeneous marine observation data.
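The cluster-then-inherit idea above, assigning an un-indexed record to the nearest cluster of catalogued data and inheriting that cluster's metadata, can be sketched in a few lines. The feature values, cluster contents, and function names below are invented for the demonstration; the paper's actual feature tensors and clustering algorithm are more elaborate.

```python
import math

# Feature vectors of already-catalogued observations, each with known metadata.
CATALOGUED = [
    ([12.1, 0.9], {"sensor": "CTD", "unit": "degC"}),
    ([11.8, 1.1], {"sensor": "CTD", "unit": "degC"}),
    ([0.2, 35.0], {"sensor": "salinometer", "unit": "PSU"}),
    ([0.3, 34.7], {"sensor": "salinometer", "unit": "PSU"}),
]

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def reconstruct_metadata(features):
    """Assign an un-indexed record to the nearest metadata cluster and
    inherit that cluster's metadata."""
    clusters = {}
    for vec, meta in CATALOGUED:
        clusters.setdefault(tuple(sorted(meta.items())), []).append(vec)
    best_meta, best_dist = None, math.inf
    for meta_key, vecs in clusters.items():
        d = math.dist(features, centroid(vecs))
        if d < best_dist:
            best_meta, best_dist = dict(meta_key), d
    return best_meta

print(reconstruct_metadata([12.0, 1.0]))  # {'sensor': 'CTD', 'unit': 'degC'}
```

A production system would replace the nearest-centroid step with a trained clustering model and confidence thresholds, so that records far from every cluster are flagged rather than mislabeled.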
From the beginning, the process of research and its publication has been an ever-growing phenomenon, and with the emergence of web technologies its growth rate is overwhelming. By a rough estimate, more than thirty thousand research journals issue around four million papers annually on average. Search engines, indexing services, and digital libraries search for such publications over the web. Nevertheless, retrieving the most relevant articles for user requests remains elusive, mainly because articles are not appropriately indexed according to hierarchies of granular subject classification. To overcome this issue, researchers are investigating new techniques for the classification of research articles, especially when the complete article text is not available (the case of non-open-access articles). The proposed study aims to investigate multilabel classification over the available metadata in the best possible way and to assess to what extent metadata-based features can perform in contrast to content-based approaches. In this regard, novel techniques for multilabel classification have been proposed, developed, and evaluated on metadata such as the title and keywords of the articles. The proposed technique has been assessed on two diverse datasets: one from the Journal of Universal Computer Science (J.UCS) and a benchmark dataset comprising articles published by the Association for Computing Machinery (ACM). The proposed technique yields encouraging results in comparison with state-of-the-art techniques in the literature.
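Metadata-only multilabel classification of the kind described above is often framed as binary relevance: each label gets its own independent decision over the title and keyword text. The sketch below illustrates that framing with a toy token-overlap scorer; the training pairs, threshold, and scoring are invented stand-ins, not the study's actual technique.

```python
# Toy binary-relevance multilabel classifier over title+keyword metadata.
# Training pairs (text, label set) are invented examples.
TRAIN = [
    ("neural network image recognition deep learning", {"AI", "Vision"}),
    ("convolutional network object detection", {"AI", "Vision"}),
    ("sql query optimization database index", {"Databases"}),
    ("transaction concurrency database locking", {"Databases"}),
]

def predict_labels(text, threshold=0.2):
    """Binary relevance: score each label independently by the average
    Jaccard overlap with that label's training documents."""
    tokens = set(text.lower().split())
    labels = {lbl for _, lbls in TRAIN for lbl in lbls}
    predicted = set()
    for lbl in labels:
        docs = [set(t.split()) for t, lbls in TRAIN if lbl in lbls]
        score = sum(len(tokens & d) / len(tokens | d) for d in docs) / len(docs)
        if score >= threshold:
            predicted.add(lbl)
    return predicted

print(predict_labels("deep network for image detection"))
```

A realistic pipeline would swap the Jaccard scorer for TF-IDF features and a per-label classifier, but the label-by-label decision structure stays the same.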
Spammer detection is the task of identifying and blocking users who perform malicious activities. Such users should be identified and removed from social media to keep the social media process organic and to maintain the integrity of online social spaces. Previous research aimed to find spammers with hybrid approaches based on graph mining, posted content, and metadata, using small and manually labeled datasets. However, such hybrid approaches are unscalable, not robust, dataset-dependent, and require numerous parameters, complex graphs, and natural language processing (NLP) resources to make decisions, which makes them impractical for real-time detection. For example, graph mining requires neighbors' information, and posted-content-based approaches require multiple tweets from user profiles plus NLP resources to make decisions, which is not feasible in a real-time environment. To fill this gap, we first propose a REal-time Metadata based Spammer detection (REMS) model that identifies spammers from metadata features alone, requiring the fewest parameters while providing adequate results. REMS is a scalable and robust model that uses only 19 metadata features of Twitter users to achieve a 73.81% F1-score using a balanced training dataset (50% spam and 50% genuine users). The 19 features comprise 8 original and 11 derived features of Twitter users, identified through extensive experiments and analysis. Second, we present the largest and most diverse dataset in the published research, comprising 211K spam users and 1 million genuine users. The diversity of the dataset is reflected in its users, who posted 2.1 million tweets on seven topics (100 hashtags) from 6 different geographical locations. REMS's superior classification performance with multiple machine and deep learning methods indicates that metadata features alone have the potential to identify spammers, rather than relying on volatile posted content and complex graph structures. The dataset and REMS's code are available on GitHub (www.github.com/mhadnanali/REMS).
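The original-plus-derived metadata feature idea can be sketched as below: ratio features are derived from raw profile counts and fed to a classifier. The specific features, thresholds, and decision rule here are invented for illustration and are not REMS's 19-feature set or its trained models.

```python
# Hedged sketch of metadata-only spam scoring: derive ratio features from
# raw profile metadata, then apply a toy decision rule.
def derive_features(user):
    """Turn raw profile metadata into derived ratio features."""
    return {
        "follower_friend_ratio": user["followers"] / max(user["friends"], 1),
        "tweets_per_day": user["tweets"] / max(user["account_age_days"], 1),
        "has_description": 1 if user["description"] else 0,
    }

def is_spammer(user):
    f = derive_features(user)
    # Toy rule: spam accounts often follow many but are followed by few,
    # post in bursts, and leave the profile description empty.
    score = 0
    score += f["follower_friend_ratio"] < 0.1
    score += f["tweets_per_day"] > 50
    score += f["has_description"] == 0
    return score >= 2

bot = {"followers": 3, "friends": 900, "tweets": 8000,
       "account_age_days": 30, "description": ""}
print(is_spammer(bot))  # True
```

In the real model, the derived features would feed a trained machine or deep learning classifier rather than hand-set thresholds; the point is that every input is available from a single profile lookup, which is what makes real-time scoring feasible.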
Metadata is "data about data". This paper introduces the basics of metadata and discusses several metadata specifications in the HTML and XML environments (including Dublin Core, PICS, Web Collections, CDF, MCF, and RDF). Since metadata plays a very important role in the organization and discovery of Internet information resources, the author calls for strengthened metadata research in China.
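Of the specifications listed, Dublin Core is commonly embedded in HTML as meta tags whose names use the DC element set. The short sketch below generates such tags; the sample record values are invented.

```python
# Emitting a Dublin Core record as HTML <meta> tags (DC element set names).
from html import escape

record = {
    "DC.title": "Metadata on the Web",
    "DC.creator": "Example Author",
    "DC.date": "1999-01-01",
    "DC.format": "text/html",
}

def dc_meta_tags(rec):
    """Render each Dublin Core element as an HTML meta tag."""
    return "\n".join(
        f'<meta name="{escape(name)}" content="{escape(value)}">'
        for name, value in rec.items()
    )

print(dc_meta_tags(record))
```

The same element set can equally be serialized as RDF/XML, which is how Dublin Core connects to the RDF specification also discussed above.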
Transportation big data systems based on e-transportation science handle the storage and computation of massive traffic data by building cluster systems composed of large high-performance computers. Not only is the required environment very demanding, but the cost is high, the deployment cycle is long, and maintenance is difficult. Moreover, as data volume, business complexity, and computational intensity grow, increasing the capacity to process massive traffic data by adding servers becomes very difficult and may even require redesigning and redeploying the cluster structure, which demands large labor and financial costs and causes enormous waste. Metadata exchange and deployment capability has therefore become a research focus of today's big-data-driven intelligent transportation systems. Facing massive traffic data, how to store, manage, process, and apply metadata is a key problem. This paper proposes the Traffic Big Data Metadata Exchange System (TBMES) to realize distributed traffic information exchange and mutual access. The architecture connects real-time traffic data with a traffic information big data platform in real time, making traffic information transmission continuous and authentic; it seamlessly connects macro and micro traffic data, enabling both analysis of road network operating conditions and evaluation of the traffic efficiency of important road nodes, for a comprehensive grasp of regional traffic operations; it makes traffic organization and management visual, quantifiable, systematic, and automated; and the system's outputs provide theoretical support for decision makers, promoting scientific traffic decision-making.
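A distributed metadata exchange of the kind TBMES describes can be pictured as nodes trading small descriptive envelopes instead of raw data. The sketch below shows such an envelope serialized for inter-node exchange; the schema fields and the tbmes:// URI scheme are assumptions invented for illustration.

```python
# Hypothetical metadata exchange envelope for distributed traffic data.
import json

def make_exchange_record(source_node, dataset_id, fields):
    """Wrap a dataset's descriptive metadata in an exchange envelope so
    another node can discover and access it without copying the raw data."""
    return {
        "source_node": source_node,
        "dataset_id": dataset_id,
        "schema": fields,  # field name -> type
        "access": f"tbmes://{source_node}/{dataset_id}",  # hypothetical URI
    }

rec = make_exchange_record(
    "node-guangzhou", "loop-detector-flow",
    {"link_id": "str", "timestamp": "int", "flow_veh_per_h": "float"},
)
payload = json.dumps(rec)       # serialize for inter-node exchange
restored = json.loads(payload)  # receiving node parses the envelope
print(restored["access"])  # tbmes://node-guangzhou/loop-detector-flow
```

Because only metadata crosses node boundaries, macro-level platforms can discover and query micro-level datasets on demand, which is the seamless macro/micro connection the abstract emphasizes.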
Funding: This work was supported by the National Natural Science Foundation of China (U2133208, U20A20161).
Funding: Supported by the National Key Research and Development Program of China (2021YFB2600405).
Funding: Supported by the Shandong Province Natural Science Foundation (No. ZR2020QF028).
Funding: Supported by the Guangzhou Government Project (Grant No. 62216235) and the National Natural Science Foundation of China (Grant Nos. 61573328, 622260-1).