Funding: Under the auspices of the Jiangsu Provincial Science and Technology Foundation of Surveying and Mapping (No. 200416).
Abstract: The paper presents a set of digital watermarking techniques by which copyright and user-rights messages are hidden in geo-spatial graphics data, as well as techniques for compressing and encrypting the watermarked geo-spatial graphics data. The technology aims at tracing and resisting the illegal distribution and duplication of geo-spatial graphics data products, so as to effectively protect the data producer's rights and to facilitate the secure sharing of geo-spatial graphics data. So far, little research on digital watermarking has been conducted in the GIS field worldwide, so this research is a novel exploration both in the security management of geo-spatial graphics data and in the application of digital watermarking techniques. Application software employing the proposed technology has been developed, and a number of experimental tests on the 1:500,000 digital bathymetric chart of the South China Sea and the 1:10,000 digital topographic map of Jiangsu Province have been conducted to verify the feasibility of the proposed technology.
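The abstract does not disclose the embedding algorithm, so the following is only a minimal, hypothetical sketch of the general idea of hiding a message in vector map data: each watermark bit is written into the parity of a quantized coordinate and read back by the same quantization. The precision value, function names, and sample coordinates are illustrative assumptions, not the authors' method.

```python
def embed_watermark(coords, bits, precision=1e-4):
    """Embed one watermark bit per point by forcing the quantized x value even or odd."""
    repeated = bits * (len(coords) // len(bits) + 1)       # repeat the message over all points
    marked = []
    for (x, y), bit in zip(coords, repeated):
        q = round(x / precision)                           # quantize x to the chosen precision
        if q % 2 != bit:                                   # adjust parity to encode the bit
            q += 1
        marked.append((q * precision, y))
    return marked

def extract_watermark(coords, n_bits, precision=1e-4):
    """Read the embedded bits back from the parity of the quantized x values."""
    return [round(x / precision) % 2 for x, _ in coords[:n_bits]]

# Toy vector data (longitude, latitude) and a 3-bit message.
pts = [(118.764312, 32.051723), (118.771055, 32.048910), (118.780441, 32.042317)]
message = [1, 0, 1]
marked = embed_watermark(pts, message)
assert extract_watermark(marked, len(message)) == message
```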
Abstract: The database management system (DBMS) is the current standard for storing information. A DBMS organizes and maintains a structure for the storage of data. Databases make it possible to store vast amounts of randomly created information and then retrieve items using associative reasoning in search routines. However, the design of databases is cumbersome. If one is to use a database primarily for direct input of information, each field must be predefined manually, and the fields must be organized to permit coherent data input. This static requirement is problematic and requires that database tables be predefined and customized at the outset, a difficult proposition since current DBMSs lack a user-friendly front end that allows flexible design of the input model. Furthermore, databases are primarily text-based, making it difficult to process graphical data. We have developed a general, non-proprietary approach to the problem of input modeling, designed to make use of the known informational architecture to map data to a database and then retrieve the original document in freely editable form. We create form templates using ordinary word-processing software: Microsoft InfoPath 2007. Each field in the form is given a unique name identifier so that it can be distinguished in the database. Text-based documents created initially in Microsoft Word can be exported by placing a colon at the beginning of any desired field location; InfoPath then captures the preceding string and uses it as the label for the field. Each form can be structured to include any combination of textual and graphical fields. We input data into InfoPath templates and then submit the data through a web service to populate fields in an SQL database. With appropriate indexing, we can then recall the entire document from the SQL database for editing, with a corresponding audit trail. Graphical data is handled no differently than textual data and is embedded in the database itself, permitting direct query approaches. This technique makes it possible for general users to benefit from a combined text-graphical database environment with a flexible, non-proprietary interface. Consequently, any template can be effortlessly transformed into a database system and easily recovered in narrative form.
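As a rough illustration of the mapping described above (and not the authors' InfoPath/web-service implementation), the sketch below stores uniquely named form fields, textual or graphical, as rows of a generic key-value table and recalls a whole document by its identifier. The schema, the sqlite3 stand-in for the SQL back end, and all names are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE form_fields (
    doc_id     TEXT,       -- identifies one submitted document
    field_name TEXT,       -- the unique field identifier from the template
    field_type TEXT,       -- 'text' or 'image'
    value      BLOB        -- text content or embedded graphical data
)""")

def submit_document(doc_id, fields):
    """Populate the database from {field_name: (type, value)} captured from a form."""
    rows = [(doc_id, name, ftype, value) for name, (ftype, value) in fields.items()]
    conn.executemany("INSERT INTO form_fields VALUES (?, ?, ?, ?)", rows)

def recall_document(doc_id):
    """Recover all fields of a document for re-editing."""
    cur = conn.execute(
        "SELECT field_name, field_type, value FROM form_fields WHERE doc_id = ?",
        (doc_id,))
    return {name: (ftype, value) for name, ftype, value in cur}

# Hypothetical document with one textual and one graphical field.
submit_document("patient_42", {
    "history": ("text", "Presented with chest pain ..."),
    "ecg_trace": ("image", b"\x89PNG..."),   # graphical data stored directly as a BLOB
})
print(recall_document("patient_42")["history"])
```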
Abstract: Mitigating the increasing number of cyberattack incidents may require strategies such as reinforcing organizations' networks with Honeypots and effectively analyzing attack traffic to detect zero-day attacks and vulnerabilities. To effectively detect and mitigate cyberattacks, both computerized and visual analyses are typically required. However, most security analysts are not adequately trained in the visualization principles and/or methods required for effective visual perception of the useful attack information hidden in attack data. Additionally, although Honeypots have proven useful in cyberattack research, no studies have comprehensively investigated visualization practices in the field. In this paper, we reviewed visualization practices and methods commonly used in the discovery and communication of attack patterns based on Honeypot network traffic data. Using the PRISMA methodology, we identified and screened 218 papers and evaluated only the 37 high-impact papers. Most Honeypot papers computed summary statistics of Honeypot data based on static data metrics such as IP address, port, and packet size, and visually analyzed Honeypot attack data using simple graphical methods (such as line, bar, and pie charts) that tend to hide useful attack information. Furthermore, only a few papers conducted extended attack analysis, commonly visualizing attack data with scatter and linear plots. Papers rarely included simple yet sophisticated graphical methods, such as box plots and histograms, which allow for critical evaluation of analysis results. While a significant number of automated visualization tools incorporate visualization standards by default, the construction of effective and expressive graphical methods for easy pattern discovery and explainable insights still requires applied knowledge and skill in visualization principles and tools, and occasionally an interdisciplinary collaboration with peers. We therefore suggest the need, going forward, for non-classical graphical methods for visualizing attack patterns and communicating analysis results. We also recommend training investigators in visualization principles and standards for effective visual perception and presentation.
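To make the recommendation concrete, here is a small, illustrative example of the kind of box plot and histogram the review advocates for inspecting a Honeypot traffic metric. The packet sizes are randomly generated placeholders, not data from any surveyed paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
packet_sizes = rng.lognormal(mean=6.0, sigma=0.8, size=2000)   # synthetic packet sizes (bytes)

fig, (ax_box, ax_hist) = plt.subplots(1, 2, figsize=(9, 3.5))

ax_box.boxplot(packet_sizes)            # exposes the median, spread, and outliers at a glance
ax_box.set_ylabel("Packet size (bytes)")
ax_box.set_title("Box plot")

ax_hist.hist(packet_sizes, bins=50)     # exposes the shape of the distribution
ax_hist.set_xlabel("Packet size (bytes)")
ax_hist.set_ylabel("Count")
ax_hist.set_title("Histogram")

fig.tight_layout()
plt.show()
```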
Funding: Supported by the National Natural Science Foundation of China under Grant No. 21373259.
Abstract: A new X-ray absorption fine structure (XAFS) data-collection system based on the Experimental Physics and Industrial Control System (EPICS) software environment has been established at the BL14W1 beamline of the Shanghai Synchrotron Radiation Facility. The system provides automatic sequential analysis of multiple samples for continuous high-throughput (HT) measurements. Specifically, 8 sample pellets are loaded into an alumina holder, and a high-precision two-dimensional translation stage is programmed to switch these samples automatically, collecting the XAFS spectrum of each sample in sequence. Experimenters run HT measurements via a graphical user interface developed with Control System Studio. Finally, the successful operation of the HT XAFS system is demonstrated by running experiments on two groups of copper–ceria catalysts, each of which contains 8 different powder samples.
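A hedged sketch of what such a sequential high-throughput loop can look like is given below. It is not the beamline's actual control code: the pyepics client is assumed, and all EPICS process-variable names, stage positions, and the busy-flag convention are hypothetical.

```python
from epics import caget, caput   # pyepics is assumed to be available
import time

# Hypothetical (x, y) stage positions, in mm, for the 8 pellet slots in the holder.
SLOT_POSITIONS = [(10.0 * i, 0.0) for i in range(8)]

def collect_ht_xafs():
    for i, (x, y) in enumerate(SLOT_POSITIONS, start=1):
        # Move the two-dimensional translation stage to the next sample pellet.
        caput("BL14W1:STAGE:X", x, wait=True)    # hypothetical PV name
        caput("BL14W1:STAGE:Y", y, wait=True)    # hypothetical PV name

        # Trigger one XAFS scan and poll a busy flag until the scan finishes.
        caput("BL14W1:XAFS:START", 1)            # hypothetical PV name
        time.sleep(2.0)                          # allow the scan to report busy
        while caget("BL14W1:XAFS:BUSY") == 1:    # hypothetical PV name
            time.sleep(1.0)
        print(f"sample {i}/8 measured")

if __name__ == "__main__":
    collect_ht_xafs()
```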
Abstract: This work is based on the AutoCAD graphic input environment and takes STL-format graphic files as the data-exchange interface to study the interactive, realistic 3D display of STL graphics. After analyzing the STL file format, the ifstream class of the standard C++ I/O library was used in the VC++6.0 programming environment to define the file object, and the getline function was called to read and parse the STL file line by line. In the data-processing module, OpenGL triangle drawing was applied to realize the visual display of the STL graphics and to generate the corresponding 3D entity data. OpenGL graphics-processing techniques (graphic transformation, lighting, materials, etc.) were applied to display 3D graphics from the input STL files in the realistic-display program module. Test reports based on testing of the application system are presented. Finally, the program design of the STL realistic graphics display system was completed, which has certain theoretical and practical significance for engineering applications.
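The paper performs the line-by-line reading in VC++6.0 with ifstream and getline; the short Python sketch below illustrates the same parsing idea for an ASCII STL file, collecting one normal and three vertices per facet so that each facet could later be drawn as an OpenGL triangle. Function and variable names are illustrative.

```python
def read_ascii_stl(path):
    """Collect (normal, [v1, v2, v3]) triangles from an ASCII STL file, line by line."""
    triangles = []
    normal, verts = None, []
    with open(path) as f:
        for line in f:                      # read and parse the file line by line
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] == "facet" and tokens[1] == "normal":
                normal = tuple(map(float, tokens[2:5]))
                verts = []
            elif tokens[0] == "vertex":
                verts.append(tuple(map(float, tokens[1:4])))
            elif tokens[0] == "endfacet":
                triangles.append((normal, verts))
    return triangles

# Each (normal, [v1, v2, v3]) entry maps onto one glNormal3f call followed by three
# glVertex3f calls inside a GL_TRIANGLES block in the OpenGL display module.
```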
Funding: Supported by the National High Technology Research and Development Program of China (No. 2007AA11Z227), the Natural Science Foundation of Jiangsu Province of China (No. BK2009352), and the Fundamental Research Funds for the Central Universities of China (No. 2010B16414).
Abstract: In video multi-target tracking, the common particle filter cannot deal well with uncertain relations among multiple targets. To solve this problem, many researchers use data association methods to reduce the multi-target uncertainty. However, traditional data association methods have difficulty tracking accurately when a target is occluded. To handle occlusion in video, this paper combines the theory of data association with a probabilistic graphical model for multi-target modeling and for analyzing the relationships among targets within the particle filter framework. Experimental results show that the proposed algorithm solves the occlusion problem better than the traditional algorithm.
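For orientation, the sketch below shows one step of the plain bootstrap particle filter that such trackers build on (predict, weight by the measurement likelihood, resample). It deliberately omits the paper's actual contribution, the probabilistic graphical model over the relations of multiple, possibly occluded targets; the motion and measurement noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, measurement, motion_std=2.0, meas_std=5.0):
    """One predict/update/resample cycle; particles is an (N, 2) array of image positions."""
    # 1. Predict: propagate every particle with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # 2. Update: reweight particles by the Gaussian likelihood of the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights = weights / weights.sum()

    # 3. Resample when the effective sample size collapses (degeneracy).
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

# Usage: track a target observed near (100, 60) in a 200 x 200 frame with 500 particles.
particles = rng.uniform(0, 200, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights, estimate = particle_filter_step(particles, weights, np.array([100.0, 60.0]))
```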
Abstract: To realize the realistic virtual display of DXF machine parts, an interactive experimental method was adopted in the AutoCAD 2007 drawing environment, with DXF taken as the data-exchange interface and source. Based on an in-depth analysis of the DXF data structure, a DXF drawing of a lathe-turned rotational part was taken as the test piece. Using VC++6.0 programming, the part's geometry information was obtained, and through data processing, 3D data of the test piece were generated from the 2D DXF data. OpenGL graphics-processing technologies (lighting, materials, textures, mapping, et al.) were then applied to the 3D display of the test piece from the DXF files or program modules. Finally, based on the test report, the results of the system functions are presented to demonstrate the realization of the system design and the feasibility of the algorithms used. With the developed software, machine designers can get a full view of machine parts and make appropriate modifications. The content and results of this work have theoretical and practical significance for the application of program design in practical projects.
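Since the DXF format stores entities as pairs of group-code and value lines, a minimal reader for the 2D geometry can look like the sketch below. It is an illustration in Python rather than the paper's VC++ code, handles only LINE entities (group codes 10/20 and 11/21), and omits arcs, polylines, and the 2D-to-3D generation step.

```python
def read_dxf_lines(path):
    """Return LINE entities as (x1, y1, x2, y2) tuples from an ASCII DXF file."""
    segments, current = [], None
    with open(path) as f:
        raw = [ln.strip() for ln in f]
    for code, value in zip(raw[0::2], raw[1::2]):   # DXF is a stream of code/value pairs
        if code == "0":                             # group code 0 starts a new entity
            if current is not None and len(current) == 4:
                segments.append((current["10"], current["20"],
                                 current["11"], current["21"]))
            current = {} if value == "LINE" else None
        elif current is not None and code in ("10", "20", "11", "21"):
            current[code] = float(value)            # 10/20: start x/y, 11/21: end x/y
    if current is not None and len(current) == 4:
        segments.append((current["10"], current["20"], current["11"], current["21"]))
    return segments
```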
Abstract: The cross-gradients joint inversion technique has been applied to multiple geophysical data sets with a significant improvement in compatibility, but its numerical implementation for practical use is rarely discussed in the literature. We present a MATLAB-based three-dimensional cross-gradients joint inversion program with application to gravity and magnetic data. The input and output information was examined with care to create a rational, independent design of the graphical user interface (GUI) and the computing kernel. For 3D visualization and data-file operations, UBC-GIF tools are invoked through a series of I/O functions. Some key issues regarding the iterative joint inversion algorithm are also discussed, for instance the forward difference of the cross gradients and the computation of the matrix pseudo-inverse. A synthetic example is employed to illustrate the whole process. Joint and separate inversions can be performed flexibly by switching the inversion mode. The resulting density model and susceptibility model demonstrate the correctness of the proposed program.
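The quantity at the heart of the method is the cross-gradient function t = grad(m1) x grad(m2), which vanishes where the two models are structurally consistent. The sketch below evaluates it with forward differences on a regular 3D mesh; it is a NumPy illustration of the quantity only, not the MATLAB program, and the cell sizes and wrap-around boundary handling are simplifications.

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dy=1.0, dz=1.0):
    """m1, m2: 3D model arrays (e.g. density and susceptibility) defined on the same mesh."""
    def forward_diff(m):
        # Forward differences along x, y, z; boundary cells wrap around (simplification).
        gx = (np.roll(m, -1, axis=0) - m) / dx
        gy = (np.roll(m, -1, axis=1) - m) / dy
        gz = (np.roll(m, -1, axis=2) - m) / dz
        return np.stack([gx, gy, gz], axis=-1)

    g1, g2 = forward_diff(m1), forward_diff(m2)
    # t = grad(m1) x grad(m2): zero wherever the two model gradients are parallel.
    return np.cross(g1, g2)

# In a joint inversion, the norm of t is driven toward zero so that the recovered
# density and susceptibility models share structural boundaries.
```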
Abstract: Oil-gas remote sensing information is obtained from satellite TM data through graphic treatment in the light of the hydrocarbon-microseepage theory. The nine target areas (of three types) selected on this basis coincide well with the occurrence of natural gases and have been confirmed by subsequent prospecting. As a result of hydrocarbon microseepage, plants in the target areas are characterized by abnormal spectral features, with the absorption peaks of chlorophyll shifting toward blue light, reflectivity in the visible range increasing, and reflectivity in the near-infrared region decreasing.
Abstract: In this paper, we present a virtual desktop that uses a novel methodology and related metrics to benchmark thin clients based on Data Delivery Networks (DDN) in terms of scalability and reliability. Most studies of wireless networks focus mainly on system performance and the power consumption of circuit systems; here, the main target is separated into data operation and GUI operation by the DDN. The communication protocol used for wireless communication plays a major role in energy consumption and other important factors, and portable devices such as Personal Digital Assistants (PDAs) are mainly concerned with efficient energy consumption (power control) in wireless networks. Our main aims are energy efficiency, algorithmic efficiency, virtualization, and resource allocation. Research directed at saving energy and reducing the carbon footprint of wireless computing remains a challenging problem, and this study provides a brief account of wireless networks in that context.
Abstract: Proteins are substances with spatial structure. The main goal of protein structure prediction is to extract useful information from existing large-scale protein datasets in order to predict the structures of proteins found in nature. One problem with current protein structure prediction experiments is the lack of datasets that further reflect the spatial structural characteristics of proteins. Although the currently dominant PDB protein dataset is experimentally measured, it does not exploit proteins' spatial features, and it suffers from contamination with nucleic acid data and from partially incomplete entries. To address these problems, protein prediction is studied here from the perspective of protein spatial structure. Based on the original PDB dataset, the Hohai Graphic Protein Data Bank (HohaiGPDB), a graph-structured protein dataset, is proposed. The dataset is built on a graph structure and expresses the spatial structural characteristics of proteins. Protein structure prediction experiments were conducted on the new dataset with a conventional Transformer network model, achieving a prediction accuracy of 59.38% on HohaiGPDB, which demonstrates the research value of the dataset. HohaiGPDB can serve as a general-purpose dataset for protein-related research.
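The abstract does not specify how HohaiGPDB encodes spatial structure as a graph, so the following sketch only illustrates one common convention for turning a protein's 3D structure into a graph: residues become nodes and an edge connects residues whose C-alpha atoms lie within a distance cutoff. The 8 Å cutoff, input format, and toy coordinates are assumptions.

```python
import numpy as np

def contact_graph(ca_coords, residue_names, cutoff=8.0):
    """ca_coords: (N, 3) C-alpha coordinates; returns residue nodes and an edge list."""
    coords = np.asarray(ca_coords, dtype=float)
    n = len(coords)
    # Pairwise C-alpha distances; an edge is added below the assumed 8 Angstrom cutoff.
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if dists[i, j] < cutoff]
    return {"nodes": list(residue_names), "edges": edges}

# Toy example with three residues (coordinates are made up for illustration).
graph = contact_graph(
    ca_coords=[[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [10.0, 0.5, 0.0]],
    residue_names=["MET", "LYS", "ALA"],
)
print(graph["edges"])   # [(0, 1), (1, 2)]
```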