Data protection in databases is critical for any organization, as unauthorized access or manipulation can have severe negative consequences. Intrusion detection systems are essential for keeping databases secure. Advancements in technology will lead to significant changes in the medical field, improving healthcare services through real-time information sharing. However, issues of reliability and consistency remain to be solved. Safeguards against cyber-attacks are necessary due to the risk of unauthorized access to sensitive information and potential data corruption. Disruptions to data items can propagate throughout the database, making it crucial to reverse fraudulent transactions without delay, especially in the healthcare industry, where real-time data access is vital. This research presents a role-based access control architecture for an anomaly detection technique. Additionally, Structured Query Language (SQL) queries are stored in a new data structure called a pentaplet. These pentaplets allow us to maintain the correlation between SQL statements within the same transaction by employing transaction-log entry information, thereby increasing detection accuracy, particularly for individuals within the company exhibiting unusual behavior. To identify anomalous queries, the system employs a supervised machine learning technique, the Support Vector Machine (SVM). According to experimental findings, the proposed model performed well in terms of detection accuracy, achieving 99.92% through SVM with One-Hot Encoding and Principal Component Analysis (PCA).
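The abstract above does not spell out the pentaplet's five fields. As a minimal sketch, assuming the fields are transaction id, user role, SQL command, accessed tables, and accessed columns (the paper's actual layout may differ), the transaction-level correlation could look like:

```python
from collections import namedtuple, defaultdict

# Hypothetical five-field record for one SQL statement; field names are
# illustrative assumptions, not the paper's definition.
Pentaplet = namedtuple("Pentaplet", ["txn_id", "role", "command", "tables", "columns"])

def group_by_transaction(pentaplets):
    """Correlate statements that belong to the same transaction-log entry."""
    txns = defaultdict(list)
    for p in pentaplets:
        txns[p.txn_id].append(p)
    return dict(txns)

log = [
    Pentaplet(1, "nurse", "SELECT", ("patients",), ("name", "ward")),
    Pentaplet(1, "nurse", "UPDATE", ("patients",), ("ward",)),
    Pentaplet(2, "admin", "DELETE", ("billing",), ("*",)),
]
txns = group_by_transaction(log)
```

Grouped this way, a per-transaction feature vector (after one-hot encoding and PCA) can be fed to an SVM classifier, as the abstract describes.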
To capitalize on the primary role of major course teaching and to facilitate students' understanding of abstract concepts in the data structure course, it is essential to increase their interest in learning and to develop case studies that highlight fine traditional culture. By incorporating these culture-rich case studies into classroom instruction, we employ a project-driven teaching approach. This not only allows students to master professional knowledge but also enhances their ability to solve specific engineering problems, ultimately fostering cultural confidence. Over the past few years, during which educational reforms have been conducted as trial runs, the feasibility and effectiveness of these reform schemes have been demonstrated.
"Data Structure and Algorithm", an important major subject in computer science, faces many problems in teaching activities. This paper introduces and analyzes the situation and problems in the study of this course. A "programming factory" method is then put forward, which is a practice-oriented platform for the teaching-study process. Good results have been obtained with this creative method.
For the course of "Data Structures", this paper introduces the importance and existing problems of the data structure course. Through the literature and the current teaching in major universities, the existing teaching methods and their disadvantages are analyzed. The authors put forward teaching reform suggestions and designed an online course platform.
In the wake of the research community gaining a deep understanding of control-hijacking attacks, data-oriented attacks have emerged. Among data-oriented attacks, the data structure manipulation attack (DSMA) is a major category. Pioneering research shows that DSMA is able to circumvent the most effective defenses against control-hijacking attacks: DEP, ASLR and CFI. To date, only two defense techniques have demonstrated their effectiveness: Data Flow Integrity (DFI) and Data Structure Layout Randomization (DSLR). However, DFI has high performance overhead, and dynamic DSLR has two main limitations. L-1: Randomizing a large set of data structures will significantly affect performance. L-2: To be practical, only a fixed subset of data structures is randomized; if the data structures targeted by an attack are not covered, dynamic DSLR is essentially ineffective. To address these two limitations, we propose a novel technique, feedback-control-based adaptive DSLR, and build a system named SALADSPlus. SALADSPlus seeks to optimize the trade-off between security and cost through feedback control. Using a novel feedback-control-based adaptive algorithm extended from the Upper Confidence Bound (UCB) algorithm, the defender (controller) uses the feedback (cost-effectiveness) from previous randomization cycles to adaptively choose the set of data structures to randomize (the next action). Unlike dynamic DSLR, the set of randomized data structures is adaptively changed based on this feedback. To obtain the feedback, SALADSPlus inserts a canary in each data structure at compile time. We have implemented SALADSPlus based on gcc-4.5.0. Experimental results show that the runtime overheads are 1.8%, 3.7%, and 5.3% when the randomization cycles are set to 10 s, 5 s, and 1 s, respectively.
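The UCB-style selection described above can be sketched in a few lines. This is a generic UCB ranking, not SALADSPlus's exact algorithm; the candidate names and the (count, summed reward) bookkeeping are illustrative assumptions:

```python
import math

def ucb_select(stats, t, k):
    """Rank candidates by UCB score: mean cost-effectiveness plus an
    exploration bonus sqrt(2 ln t / n); pick the top k to randomize."""
    def score(name):
        n, total = stats[name]
        if n == 0:
            return float("inf")  # untried candidates get priority
        return total / n + math.sqrt(2.0 * math.log(t) / n)
    return sorted(stats, key=score, reverse=True)[:k]

# stats maps each candidate data structure to (times chosen, summed feedback)
stats = {"task_struct": (10, 9.0), "inode": (10, 1.0), "cred": (0, 0.0)}
chosen = ucb_select(stats, t=21, k=2)
```

After each randomization cycle, the canary-derived feedback would update `stats`, so the chosen set drifts toward the structures attackers actually probe.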
For storing and modeling three-dimensional (3D) topographic objects (e.g. buildings, roads, dykes, and the terrain), tetrahedralizations have been proposed as an alternative to boundary representations. While in theory they have several advantages, current implementations are either not space efficient or do not store topological relationships (which makes spatial analysis and updating slow, or requires the use of an expensive 3D spatial index). We discuss in this paper an alternative data structure for storing tetrahedralizations in a database management system (DBMS). It is based on the idea of storing only the vertices and the stars of edges; triangles and tetrahedra are represented implicitly. It has been used previously in main memory, but not in a DBMS. We describe how to modify it to obtain an efficient implementation in a DBMS, and how it can be used for modeling 3D topography. As we demonstrate with different real-world examples, the structure is more compact than known alternatives, it permits us to store attributes for any primitive, and it has the added benefit of being topological, which permits us to query it efficiently. The structure can be easily implemented in most DBMSs (we describe our implementation in PostgreSQL), and we present some of the engineering choices we made for the implementation.
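The "vertices plus stars of edges" idea above can be illustrated with a toy example. Here an edge's star is stored as an ordered ring of vertices around it, and tetrahedra are derived implicitly from consecutive ring members; this is a rough sketch of the general idea, not the paper's actual encoding:

```python
# Five vertices: edge (1, 2) is surrounded by the ring of vertices [3, 4, 5].
vertices = {1: (0, 0, 0), 2: (1, 0, 0), 3: (0, 1, 0), 4: (0, 0, 1), 5: (0, 0, -1)}
edge_stars = {(1, 2): [3, 4, 5]}  # ordered ring around the edge (interior edge)

def tetrahedra_of_edge(edge, star):
    """Recover the implicit tetrahedra incident to an edge: each pair of
    consecutive ring vertices plus the edge's two endpoints forms one.
    (For a boundary edge the ring would not wrap around.)"""
    a, b = edge
    return [(a, b, star[i], star[(i + 1) % len(star)]) for i in range(len(star))]

tets = tetrahedra_of_edge((1, 2), edge_stars[(1, 2)])
```

Because triangles and tetrahedra are never stored explicitly, the representation stays compact while adjacency queries remain answerable from the stars.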
3D city models are widely used in many disciplines and applications, such as urban planning, disaster management, and environmental simulation. Usually, the terrain and embedded objects like buildings are taken into consideration. A consistent model integrating these elements is vital for GIS analysis, especially if the geometry is accompanied by the topological relations between neighboring objects. Such a model allows for more efficient and error-free analysis. Memory consumption is another crucial aspect when the wide area of a city is considered; light models are highly desirable. Three methods of terrain representation using a geometrical-topological data structure, the dual half-edge, are proposed in this article. The integration of buildings and other structures like bridges with the terrain is also presented.
Data structure is the core course for computer science majors, and improving students' computational thinking ability is a crucial and challenging goal of this course. To optimize the teaching effect, a classroom teaching model is proposed that combines the Yu Classroom online teaching tools, Chinese university MOOC teaching resources, and the BOPPPS model. An empirical experiment was implemented, and the results show that the comprehensive application of this teaching model helps to cultivate students' high-level cognitive abilities of analysis, evaluation, and creation, demonstrating the validity of the teaching model in improving student outcomes.
The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also being able to withstand malicious cyberattacks. To meet the high hardware resource requirements and to address the vulnerability to network attacks and poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. Firstly, the microgrid data storage model is designed based on edge computing technology. The blockchain, deployed on the edge computing server and combined with cloud storage, ensures reliable data storage in the microgrid. Secondly, a blockchain consensus algorithm based on a directed acyclic graph data structure is proposed to effectively improve data storage timeliness and avoid the disadvantages of traditional blockchain topology, such as long chain construction time and low consensus efficiency. Finally, considering the differences in tolerance to network attacks among the candidate chain-building nodes, a hash value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method can achieve a private key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, blockchain construction takes no more than 80 minutes, and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method has higher efficiency in data storage and better resistance to network attacks.
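The structural difference between a DAG ledger and a linear chain, mentioned above, is that a block may reference several parents. A minimal sketch (the block fields and hashing scheme are illustrative assumptions, not the paper's protocol):

```python
import hashlib
import json

def make_block(payload, parent_hashes):
    """A block in a DAG ledger references a set of parents, not just one,
    so multiple nodes can append concurrently without serializing on a tip."""
    body = {"payload": payload, "parents": sorted(parent_hashes)}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

genesis = make_block("genesis", [])
a = make_block("meter reading 1", [genesis["hash"]])        # appended in parallel
b = make_block("meter reading 2", [genesis["hash"]])        # appended in parallel
merged = make_block("checkpoint", [a["hash"], b["hash"]])   # later block joins both
```

Any tampering with a parent's payload changes its hash and breaks every descendant's reference, which is the tamper-evidence property the consensus layer builds on.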
Taking autonomous and driverless driving as the research object, we discuss and define the intelligent high-precision map. The intelligent high-precision map is considered a key link in future travel, a carrier of real-time perception of traffic resources in the entire space-time range, and the criterion for the operation and control of the whole process of the vehicle. As a new form of map, it has distinctive features in terms of cartography theory and application requirements compared with traditional navigation electronic maps. Thus, it is necessary to analyze and discuss its key features and problems to promote the development of research on and application of the intelligent high-precision map. Accordingly, we propose an information transmission model based on cartography theory, combined with the wheeled robot's control flow in practical applications. Next, we put forward the data logic structure of the intelligent high-precision map and analyze its application in autonomous driving. Then, we summarize the computing mode of "Crowdsourcing + Edge-Cloud Collaborative Computing" and present a key technical analysis of how to improve the quality of crowdsourced data. We also analyze effective application scenarios of the intelligent high-precision map in the future. Finally, we present some thoughts and suggestions for the future development of this field.
Efficient methods for incorporating engineering experience into the intelligent generation and optimization of shear wall structures are lacking, hindering intelligent design performance assessment and enhancement. This study introduces an assessment method used in the intelligent design and optimization of shear wall structures that effectively combines mechanical analysis and formulaic encoding of empirical rules. First, the critical information about the structure was extracted through data structuring. Second, an empirical rule assessment method was developed based on the engineer's experience and design standards to complete a preliminary assessment and screening of the structure. Subsequently, an assessment method based on mechanical performance and material consumption was used to compare different structural schemes comprehensively. Finally, the assessment effectiveness was demonstrated using a typical case. Compared to traditional assessment methods, the proposed method is more comprehensive and significantly more efficient, promoting the intelligent transformation of structural design.
With the increasing number of digital devices generating vast amounts of video data, the recognition of abnormal image patterns has become more important. Accordingly, it is necessary to develop a method that achieves this task using object and behavior information within video data. Existing methods for detecting abnormal behaviors focus only on simple motions and therefore cannot determine the overall behavior occurring throughout a video. In this study, an abnormal behavior detection method that uses deep learning (DL)-based video-data structuring is proposed. Objects and motions are first extracted from continuous images by combining existing DL-based image analysis models. The weight of the continuous data pattern is then analyzed through data structuring to classify the overall video. The performance of the proposed method was evaluated using varying parameter settings, such as the size of the action clip and the interval between action clips. The model achieved an accuracy of 0.9817, indicating excellent performance. We therefore conclude that the proposed data structuring method is useful for detecting and classifying abnormal behaviors.
Freebase is a large collaborative knowledge base and database of general, structured information for public use. Its structured data has been harvested from many sources, including individual, user-submitted wiki contributions. Its aim is to create a global resource so that people (and machines) can access common information more effectively, though it is mostly available in English. In this research work, we have developed a technique for creating a Freebase for the Bengali language. The number of Bengali articles on the internet is growing day by day, so a structured data store in Bengali has become necessary. It consists of different types of concepts (topics) and relationships between those topics. These cover areas such as popular culture (e.g. films, music, books, sports, television), location information (restaurants, geolocations, businesses), scholarly information (linguistics, biology, astronomy), birthplaces (of poets, politicians, actors, actresses), and general knowledge (Wikipedia). It will be very helpful for relation extraction or any kind of Natural Language Processing (NLP) work on the Bengali language. In this work, we identified the technique for creating the Bengali Freebase and made a collection of Bengali data. We applied the SPARQL query language to extract information from natural-language (Bengali) documents such as Wikidata, which is typically in RDF (Resource Description Framework) triple format.
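The SPARQL queries mentioned above boil down to pattern matching over (subject, predicate, object) triples. The toy matcher below illustrates that idea in plain Python, with `None` playing the role of a SPARQL variable; the triples are invented examples, and the real work queries Wikidata's RDF dumps with actual SPARQL:

```python
# Invented RDF-style triples for illustration only.
triples = [
    ("Rabindranath_Tagore", "occupation", "poet"),
    ("Rabindranath_Tagore", "birthplace", "Kolkata"),
    ("Kazi_Nazrul_Islam", "occupation", "poet"),
]

def match(triples, s=None, p=None, o=None):
    """Return the triples matching the pattern; None is a wildcard,
    analogous to a variable (?x) in a SPARQL basic graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

poets = match(triples, p="occupation", o="poet")
```

The equivalent SPARQL would be roughly `SELECT ?x WHERE { ?x occupation poet }`, returning both subjects.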
Current storage mechanisms give little consideration to the characteristics of how data are kept. This can produce fragments of various sizes among data sets, and in some cases these fragments may be serious and harm system performance. In this paper, we modify the current storage mechanism. We introduce an extra storage unit called the data bucket into the classical data management architecture, and we modify the data management mechanism to complete our design. By placing data according to their access information, both the number of fragments and the fragment size are greatly reduced. Considering different data features and storage device conditions, we also improve the solid state drive (SSD) lifetime by keeping data in different spaces. Experiments show that our designs have a positive influence on SSD storage density and actual service time.
As digital data circulation increases, information pollution and manipulation in journalism have become more prevalent. In this study, a new digital journalism model is designed to contribute to the solution of the main current problems in digital journalism, such as information pollution, manipulation, and accountability. The model uses blockchain technology due to its transparency, immutability, and traceability. However, the blockchain data structure makes it hard to provide the mechanisms necessary for journalism, such as updating one piece of information and instantly updating all other information affected by it, establishing logical relationships between news items, making quick comparisons, sorting and indexing news, and keeping the changing information about the news in the system. For this reason, we have developed a new data structure that provides the immutability, transparency, and traceability properties of the blockchain while also supporting the communication mechanisms necessary for journalism. The functionality of our proposed data structure is demonstrated in terms of communication mechanisms such as mutability, context, consistency, and reliability through example scenarios. Additionally, our data structure is compared with the data structure of blockchain technology in terms of time, space, and maintenance costs. While the model size increases linearly in blockchain, our model's size remains approximately constant since the structure we developed is data-independent. In this way, maintenance costs are reduced. Since our model also has an indexing mechanism, it reduces the linear-time search complexity to logarithmic time. As a result, the data structure we developed is found to have higher performance than blockchain in the journalism context. In future studies, we plan to test all aspects of the model with a pilot application, eliminate its shortcomings, and develop a holistic approach to the root causes of the problems in journalism.
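Two of the properties claimed above, hash-based tamper evidence and logarithmic-time lookup via an index, can be combined in a small sketch. This is a generic illustration under assumed record fields, not the paper's actual structure:

```python
import hashlib
import bisect

class NewsStore:
    """Sketch: each record is hash-linked to its predecessor for tamper
    evidence, and a sorted key index gives O(log n) lookup instead of a
    linear chain walk."""
    def __init__(self):
        self.keys, self.records = [], {}
        self.prev_hash = "0" * 64
    def add(self, key, text):
        h = hashlib.sha256((self.prev_hash + key + text).encode()).hexdigest()
        self.records[key] = {"text": text, "hash": h, "prev": self.prev_hash}
        bisect.insort(self.keys, key)     # keep the index sorted
        self.prev_hash = h
    def get(self, key):
        i = bisect.bisect_left(self.keys, key)   # binary search in the index
        return self.records[key] if i < len(self.keys) and self.keys[i] == key else None

store = NewsStore()
store.add("2024-01-02/a", "headline one")
store.add("2024-01-03/b", "headline two")
```

Updating a record would append a new hashed version rather than overwriting, preserving traceability while allowing the "mutability" the abstract calls for.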
Observability and traceability of developed software are crucial to its success in software engineering. Observability is the ability to comprehend a system's internal state from the outside; monitoring is used to determine what causes system problems and why. Logs are among the most critical technologies for guaranteeing observability and traceability, and they are frequently used to investigate software events. In current logging technologies, software events are processed independently of each other; consequently, current logging technologies do not reveal relationships. However, system events do not occur independently of one another. With this perspective, our research has produced a new log design pattern that displays the relationships between events. In the design we have developed, the hash mechanism of blockchain technology enables the display of the logs' relationships. The created design pattern was compared to blockchain technology, demonstrating its performance through scenarios. The recommended log design pattern outperforms blockchain technology in terms of time and space for software engineering observability and traceability. In this context, we anticipate that the log design pattern we provide will strengthen the methods used to monitor software projects and ensure the traceability of relationships.
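The core idea above, log entries that carry hashes of the events they are related to, can be sketched as follows. The entry fields and the lineage walk are illustrative assumptions, not the paper's exact design pattern:

```python
import hashlib
import json

events = {}

def log_event(message, related=()):
    """Each entry records the hashes of the events it was caused by, so
    relationships are explicit rather than reconstructed after the fact."""
    body = {"message": message, "related": list(related)}
    h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    events[h] = body
    return h

def lineage(h):
    """Walk back through related events (depth-first) to trace a failure."""
    out, stack = [], [h]
    while stack:
        cur = stack.pop()
        out.append(events[cur]["message"])
        stack.extend(events[cur]["related"])
    return out

req = log_event("request received")
db = log_event("db timeout", [req])
err = log_event("500 returned", [db])
```

Given only the final error's hash, `lineage(err)` recovers the causal chain, which is exactly the relationship information conventional flat logs discard.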
Ontologies have been used for several years in the life sciences to formally represent concepts and reason about knowledge bases in domains such as the semantic web, information retrieval, and artificial intelligence. Exploring these domains for correspondence of semantic content requires calculating a measure of semantic similarity between concepts. Semantic similarity is a measure based on the similarity of meanings, referring to the similarity between two concepts belonging to one or more ontologies. The similarity between concepts is also a quantitative measure of information, calculated from the properties of concepts and their relationships. This study proposes a method for finding the similarity between concepts in two different ontologies based on features, information content, and structure. More specifically, we propose a hybrid method using two existing measures to find the similarity between two concepts from different ontologies based on information content and the set of common superconcepts, i.e. the set of common parent concepts. We simulated our method on datasets, and the results show that our measure provides similarity values that are better than those reported in the literature.
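A hybrid of the two ingredients named above, the set of common superconcepts and information content, can be sketched on a toy ontology. The combination formula and the concept names are illustrative assumptions; the paper's actual measure is not reproduced here:

```python
# Toy ontology as a child -> parents map (invented names).
onto = {"dog": ["canine"], "canine": ["mammal"], "cat": ["feline"],
        "feline": ["mammal"], "mammal": ["animal"], "animal": []}

def supers(c):
    """A concept together with all of its ancestors (superconcepts)."""
    seen, stack = set(), [c]
    while stack:
        x = stack.pop()
        if x not in seen:
            seen.add(x)
            stack.extend(onto[x])
    return seen

def similarity(c1, c2, ic):
    """Hedged sketch of a feature/IC hybrid: Jaccard overlap of the
    superconcept sets, weighted by the information content (ic) of the
    most informative shared ancestor."""
    s1, s2 = supers(c1), supers(c2)
    common = s1 & s2
    if not common:
        return 0.0
    return (len(common) / len(s1 | s2)) * max(ic[c] for c in common)
```

With `ic` estimated from corpus frequencies (rarer concepts carry more information), deeper shared ancestors like "mammal" contribute more than generic ones like "animal".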
Encephalitis is a brain inflammation disease that can lead to seizures, motor disability, or some loss of vision or hearing. Sometimes encephalitis can be life-threatening, and a proper diagnosis at an early stage is crucial. Therefore, in this paper, we propose a deep learning model for the computerized detection of encephalitis from electroencephalogram (EEG) data. We also propose a density-based clustering model to classify the distinctive waves of encephalitis. Customary clustering models usually employ a single computed centroid, a virtual point, to define the cluster configuration, but this single point does not contain adequate information. To precisely extract accurate inner structural data, a multiple-centroids approach is employed and defined in this paper, which defines the cluster configuration by allocating weights to each state in the cluster. The multiple-EEG-view fuzzy learning approach incorporates data from every single view to enhance the model's clustering performance. A fuzzy density-based clustering model with multiple centroids (FDBC) is also presented; this model employs multiple real state centroids to define clusters using the Partitioning Around Centroids algorithm. The experimental results validate the medical importance of the proposed clustering model.
The potential impact of quantum computing on various industries such as finance, healthcare, cryptography, and transportation is significant; therefore, sectors face challenges in understanding where to start because of the complex nature of this technology. Starting early to explore what needs to be done is crucial for providing sectors with the necessary knowledge, tools, and processes to keep pace with rapid advancements in quantum computing. This article emphasizes the importance of consultancy and governance solutions that aid sectors in preparing for the quantum computing revolution. The article begins by discussing the reasons why sectors need to be prepared for quantum computing and emphasizes the importance of proactive preparation, illustrating this point with a real-world example of a partnership. Subsequently, the article describes the benefits of quantum computing readiness, including increased competitiveness, improved security, and structured data. In addition, the article discusses the steps that various sectors can take to achieve quantum readiness, considering the potential risks and opportunities in their industries. The proposed solutions for achieving quantum computing readiness include establishing a quantum computing office, contracting with major quantum computing companies, and learning from quantum computing organizations. The article details the advantages and disadvantages of each of these steps and emphasizes the need to carefully evaluate their potential drawbacks to ensure that they align with a sector's unique needs, goals, and available resources. Finally, the article proposes various solutions and recommendations for sectors to achieve quantum computing readiness.
Reaction products of 2,4,6-tris(4-phenyl-phenoxy)-1,3,5-triazine, derived from 4-phenylphenol cyanate ester, and phenyl glycidyl ether were analyzed. In addition to an isocyanurate compound and an oxazolidone compound, which are well known as reaction products of cyanate esters and epoxy resins, compounds with a hybrid cyanurate/isocyanurate ring structure were identified. Calculations found the Gibbs free energy of the hybrid cyanurate/isocyanurate compound with two isocyanurate moieties to be lower than that of the compound with a cyanurate ring structure. The calculation data supported the existence of the hybrid cyanurate/isocyanurate ring structure. It was revealed that isomerization from cyanurate to isocyanurate occurs via the hybrid cyanurate/isocyanurate ring structure in the reaction of aryl cyanurate and epoxy.
Funding: The authors are thankful to the Dean of Scientific Research at Najran University for funding this work under the Research Groups Funding Program, Grant Code (NU/RG/SERC/12/6).
Funding: This work presents the research outcomes of a blended top-tier undergraduate course in Henan Province, Data Structures and Algorithms (Jiao Gao [2022] 324); a research-based teaching demonstration course in Henan Province, Data Structures and Algorithms (Jiao Gao [2023] 36); and a model course of ideological and political education of Anyang Normal University, Data Structures and Algorithms (No. YBKC20210012).
Funding: Supported by NSF B55101680, NTIF B2090571, B2110140, SCUT x2rjD2116860, Y1080170, Y1090160, Y1100030, Y1100050, Y1110020, S1010561121, and G101056137.
Funding: Supported in part by grants from the Qinglan Project of Jiangsu Province (2020); the High-level Demonstration Construction Project of Sino-foreign Cooperation in Running Schools in Jiangsu Province; the Jiangsu Higher Educational Technology Research Association 2019 Higher Education Informatization Research Project (2019JSETKT035); and the Ideological and Political Special Project of Philosophy and Social Science Research Projects in Colleges and Universities in 2019 (2019SJB422).
Abstract: For the course "Data Structures", this paper introduces the importance of the data structure course and its existing problems. Drawing on the literature and on current teaching practice in major universities, the existing teaching methods and their disadvantages are analyzed. The authors put forward teaching reform suggestions and designed an online course platform.
Funding: supported by ARO W911NF-13-1-0421 (MURI), NSF CNS-1422594, and NSF CNS-1505664.
Abstract: In the wake of the research community gaining a deep understanding of control-hijacking attacks, data-oriented attacks have emerged. Among data-oriented attacks, data structure manipulation attack (DSMA) is a major category. Pioneering research shows that DSMA is able to circumvent the most effective defenses against control-hijacking attacks: DEP, ASLR, and CFI. To date, only two defense techniques have demonstrated their effectiveness: Data Flow Integrity (DFI) and Data Structure Layout Randomization (DSLR). However, DFI has high performance overhead, and dynamic DSLR has two main limitations. L-1: Randomizing a large set of data structures significantly affects performance. L-2: To be practical, only a fixed subset of data structures is randomized; if the data structures targeted by an attack are not covered, dynamic DSLR is essentially ineffective. To address these two limitations, we propose a novel technique, feedback-control-based adaptive DSLR, and build a system named SALADSPlus. SALADSPlus seeks to optimize the trade-off between security and cost through feedback control. Using a novel feedback-control-based adaptive algorithm extended from the Upper Confidence Bound (UCB) algorithm, the defender (controller) uses the feedback (cost-effectiveness) from previous randomization cycles to adaptively choose the set of data structures to randomize (the next action). Unlike dynamic DSLR, the set of randomized data structures is adaptively changed based on the feedback. To obtain the feedback, SALADSPlus inserts a canary into each data structure at compile time. We have implemented SALADSPlus based on gcc-4.5.0. Experimental results show that the runtime overheads are 1.8%, 3.7%, and 5.3% when the randomization cycle is set to 10 s, 5 s, and 1 s respectively.
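The UCB-driven selection the SALADSPlus abstract describes can be illustrated with a small sketch. This is not the paper's implementation: the structure names, the reward bookkeeping, and the exploration constant are all invented here to show how cost-effectiveness feedback from earlier randomization cycles would steer the choice of which data structures to randomize next.

```python
import math

def ucb_select(stats, k, t, c=1.4):
    """Pick the k structures with the highest UCB score.

    stats maps a structure name to (total_reward, times_randomized);
    t is the current cycle (1-based). Unexplored structures score
    infinity so each one is tried at least once.
    """
    def score(item):
        _, (reward, pulls) = item
        if pulls == 0:
            return float("inf")
        # empirical mean cost-effectiveness + exploration bonus
        return reward / pulls + c * math.sqrt(math.log(t) / pulls)

    ranked = sorted(stats.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical feedback from earlier cycles, plus one unexplored structure.
stats = {"task_struct": (3.0, 3), "inode": (0.5, 3), "cred": (0.0, 0)}
chosen = ucb_select(stats, k=2, t=7)
```

The key property is visible even at this scale: the unexplored structure is forced into the set once, after which its observed cost-effectiveness competes with the others.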
Funding: this research is supported by the Dutch Technology Foundation STW, which is part of the Netherlands Organization for Scientific Research (NWO) and which is partly funded by the Ministry of Economic Affairs (project codes: 11300 and 11185).
Abstract: For storing and modeling three-dimensional (3D) topographic objects (e.g. buildings, roads, dykes, and the terrain), tetrahedralizations have been proposed as an alternative to boundary representations. While in theory they have several advantages, current implementations are either not space efficient or do not store topological relationships (which makes spatial analysis and updating slow, or requires the use of an expensive 3D spatial index). In this paper we discuss an alternative data structure for storing tetrahedralizations in a database management system (DBMS). It is based on the idea of storing only the vertices and the stars of edges; triangles and tetrahedra are represented implicitly. It has been used previously in main memory, but not in a DBMS. We describe how to modify it to obtain an efficient implementation in a DBMS, and how it can be used for modeling 3D topography. As we demonstrate with different real-world examples, the structure is more compact than known alternatives, permits us to store attributes for any primitive, and has the added benefit of being topological, which permits us to query it efficiently. The structure can be easily implemented in most DBMSs (we describe our implementation in PostgreSQL), and we present some of the engineering choices we made for the implementation.
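The "stars of edges" idea can be sketched in a few lines. This is an illustrative toy, not the paper's PostgreSQL schema: vertices are stored explicitly, each edge keeps the ordered ring of vertices around it, and the incident tetrahedra are enumerated implicitly from that ring.

```python
# Vertices stored explicitly (id -> coordinates).
vertices = {1: (0, 0, 0), 2: (1, 0, 0), 3: (0, 1, 0), 4: (0, 0, 1), 5: (0, -1, 0)}

# Edge (1, 2) with its star: the ordered ring of vertices orbiting the edge.
stars = {(1, 2): [3, 4, 5]}

def tetrahedra(edge, star):
    """Enumerate the implicit tetrahedra incident to an edge: the edge's
    endpoints plus each pair of consecutive vertices in its star ring."""
    a, b = edge
    n = len(star)
    return [(a, b, star[i], star[(i + 1) % n]) for i in range(n)]

tets = tetrahedra((1, 2), stars[(1, 2)])
```

Because tetrahedra (and triangles) are derived rather than stored, the representation stays compact while the adjacency needed for topological queries remains recoverable.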
Funding: the authors would like to thank the sponsors for their support: research on the dual half-edge data structure was funded by the EPSRC and Ordnance Survey, UK (New CASE Award, 2006-2010), and by the Technical University of Malaysia and the Ministry of Science, Technology and Innovation, Malaysia (eScience 01-01-06-SF1046, Vot No. 4S049) (2011-2014).
Abstract: 3D city models are widely used in many disciplines and applications, such as urban planning, disaster management, and environmental simulation. Usually, the terrain and embedded objects like buildings are taken into consideration. A consistent model integrating these elements is vital for GIS analysis, especially if the geometry is accompanied by the topological relations between neighboring objects. Such a model allows for more efficient and error-free analysis. Memory consumption is another crucial aspect when the wide area of a city is considered; light models are highly desirable. Three methods of terrain representation using the geometrical-topological data structure known as the dual half-edge are proposed in this article. The integration of buildings and other structures like bridges with the terrain is also presented.
Abstract: Data structure is a core course for computer science majors. How to improve students' computational thinking ability is crucial and challenging in this course. To optimize the teaching effect, a classroom teaching model is proposed. This model combines the Yu Classroom online teaching tool, Chinese university MOOC teaching resources, and the BOPPPS model. An empirical experiment was implemented, and the results show that the comprehensive application of this teaching model helps to cultivate students' high-level cognitive abilities of analysis, evaluation, and creation, demonstrating the validity of the model.
Abstract: The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also being able to withstand malicious cyberattacks. To meet the high hardware resource requirements and to address the vulnerability to network attacks and poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. Firstly, the microgrid data storage model is designed based on edge computing technology. The blockchain, deployed on the edge computing server and combined with cloud storage, ensures reliable data storage in the microgrid. Secondly, a blockchain consensus algorithm based on the directed acyclic graph data structure is proposed to effectively improve data storage timeliness and avoid the disadvantages of traditional blockchain topologies, such as long chain-construction time and low consensus efficiency. Finally, considering the differences in tolerance to network attacks among the candidate chain-building nodes, a hash value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method can achieve a private key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, blockchain construction takes no more than 80 minutes, and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method has higher data storage efficiency and better resistance to network attacks.
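The structural difference between a linear chain and a DAG-based ledger can be sketched briefly. This is a generic illustration of the data structure, not the paper's consensus algorithm or trust mechanism: each block hashes over its payload and the hashes of several parent blocks, so one new block can confirm multiple earlier ones at once. The meter payloads are made up.

```python
import hashlib
import json

def block_hash(payload, parent_hashes):
    """Hash a DAG block over its payload and its (sorted) parent hashes."""
    body = json.dumps({"payload": payload, "parents": sorted(parent_hashes)})
    return hashlib.sha256(body.encode()).hexdigest()

genesis = block_hash({"seq": 0}, [])
# Two blocks extend the genesis independently...
a = block_hash({"meter": "m1", "kwh": 3.2}, [genesis])
b = block_hash({"meter": "m2", "kwh": 1.7}, [genesis])
# ...and a single later block confirms both, which a linear chain cannot do.
tip = block_hash({"meter": "m3", "kwh": 0.9}, [a, b])
```

Sorting the parent hashes makes the block identifier independent of the order in which parents are listed, which keeps verification deterministic.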
Funding: National Key Research and Development Program (No. 2018YFB1305001) and Major Consulting and Research Project of the Chinese Academy of Engineering (No. 2018-ZD-02-07).
Abstract: Taking autonomous driving and driverless vehicles as the research object, we discuss and define the intelligent high-precision map. The intelligent high-precision map is considered a key link in future travel, a carrier for real-time perception of traffic resources across the entire space-time range, and the criterion for the operation and control of the whole driving process. As a new form of map, it has distinctive features in terms of cartography theory and application requirements compared with traditional navigation electronic maps. Thus, it is necessary to analyze and discuss its key features and problems to promote the development of research on, and application of, the intelligent high-precision map. Accordingly, we propose an information transmission model based on cartography theory, combined with the wheeled robot's control flow in practical applications. Next, we put forward the data logic structure of the intelligent high-precision map and analyze its application in autonomous driving. Then, we summarize the computing mode of "crowdsourcing + edge-cloud collaborative computing" and carry out a key technical analysis of how to improve the quality of crowdsourced data. We also analyze the effective application scenarios of the intelligent high-precision map in the future. Finally, we present some thoughts and suggestions for the future development of this field.
Abstract: Efficient methods for incorporating engineering experience into the intelligent generation and optimization of shear wall structures are lacking, hindering intelligent design performance assessment and enhancement. This study introduces an assessment method for the intelligent design and optimization of shear wall structures that effectively combines mechanical analysis and formulaic encoding of empirical rules. First, the critical information about the structure was extracted through data structuring. Second, an empirical rule assessment method was developed based on the engineer's experience and design standards to complete a preliminary assessment and screening of the structure. Subsequently, an assessment method based on mechanical performance and material consumption was used to compare different structural schemes comprehensively. Finally, the assessment effectiveness was demonstrated using a typical case. Compared to traditional assessment methods, the proposed method is more comprehensive and significantly more efficient, promoting the intelligent transformation of structural design.
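The rule-screening step can be sketched as predicates over structured design data. The rules and thresholds below are invented purely for illustration; the paper's actual empirical rules come from engineers' experience and design standards, which are not reproduced here.

```python
def passes_rules(scheme, rules):
    """A scheme survives preliminary screening only if every rule holds."""
    return all(rule(scheme) for rule in rules)

# Hypothetical encoded rules; the real ones encode design-standard limits.
rules = [
    lambda s: s["wall_ratio"] >= 0.02,   # enough shear-wall area
    lambda s: s["max_span_m"] <= 8.0,    # spans within a practical limit
]

candidates = [
    {"id": "A", "wall_ratio": 0.025, "max_span_m": 6.0},
    {"id": "B", "wall_ratio": 0.010, "max_span_m": 6.0},
]
screened = [s for s in candidates if passes_rules(s, rules)]
```

Only schemes passing the encoded rules proceed to the more expensive mechanical-performance comparison, which is where the efficiency gain over assessing every scheme comes from.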
基金supported by Basic Science Research Program through the NationalResearch Foundation of Korea (NRF)funded by the Ministry of Education (2020R1A6A1A03040583).
Abstract: With the increasing number of digital devices generating vast amounts of video data, the recognition of abnormal image patterns has become more important. Accordingly, it is necessary to develop a method that achieves this task using object and behavior information within video data. Existing methods for detecting abnormal behaviors focus only on simple motions, and therefore cannot determine the overall behavior occurring throughout a video. In this study, an abnormal behavior detection method that uses deep learning (DL)-based video-data structuring is proposed. Objects and motions are first extracted from continuous images by combining existing DL-based image analysis models. The weight of the continuous data pattern is then analyzed through data structuring to classify the overall video. The performance of the proposed method was evaluated using varying parameter settings, such as the size of the action clip and the interval between action clips. The model achieved an accuracy of 0.9817, indicating excellent performance. Therefore, we conclude that the proposed data structuring method is useful in detecting and classifying abnormal behaviors.
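The whole-video classification step can be sketched as weighted aggregation of per-clip results. The labels and weights below are made up; the paper's actual pattern analysis is learned from DL model outputs, not a fixed vote like this.

```python
from collections import Counter

def classify_video(clips):
    """clips: list of (label, weight) pairs, one per action clip.
    Returns the label with the largest total weight across the video."""
    totals = Counter()
    for label, weight in clips:
        totals[label] += weight
    return totals.most_common(1)[0][0]

# Hypothetical per-clip labels with confidence weights.
clips = [("normal", 0.9), ("fight", 0.8), ("fight", 0.7), ("normal", 0.4)]
verdict = classify_video(clips)
```

Aggregating over the clip sequence is what lets the method judge the behavior of the video as a whole rather than of isolated motions.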
Abstract: Freebase is a large collaborative knowledge base and database of general, structured information for public use. Its structured data has been harvested from many sources, including individual, user-submitted wiki contributions. Its aim is to create a global resource so that people (and machines) can access common information, which is mostly available in English, more effectively. In this research work, we have tried to build a technique for creating a Freebase for the Bengali language. The number of Bengali articles on the internet is growing day by day, so it has become necessary to have a structured data store in Bengali. It consists of different types of concepts (topics) and relationships between those topics. These cover different areas, such as popular culture (e.g. films, music, books, sports, television), location information (restaurants, geolocations, businesses), scholarly information (linguistics, biology, astronomy), birthplaces (of poets, politicians, actors, and actresses), and general knowledge (Wikipedia). It will be very helpful for relation extraction or any kind of Natural Language Processing (NLP) work on the Bengali language. In this work, we identified the technique for creating the Bengali Freebase and made a collection of Bengali data. We applied the SPARQL query language to extract information from natural-language (Bengali) documents such as Wikidata, which is typically in RDF (Resource Description Framework) triple format.
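The triple-pattern querying that SPARQL performs over such RDF data can be mimicked in plain Python. The entities below are invented examples, not records from the actual Bengali corpus, and `None` stands in for a SPARQL variable.

```python
# (subject, predicate, object) triples, as in an RDF store.
triples = [
    ("Rabindranath_Tagore", "occupation", "poet"),
    ("Rabindranath_Tagore", "birthplace", "Kolkata"),
    ("Satyajit_Ray", "occupation", "film_director"),
]

def match(pattern, store):
    """Return every triple matching the pattern; None matches anything,
    playing the role of a SPARQL variable like ?person."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# Analogue of: SELECT ?s WHERE { ?s occupation poet }
poets = match((None, "occupation", "poet"), triples)
```

Real SPARQL adds joins, filters, and graph traversal on top, but the basic graph-pattern matching is exactly this per-position comparison.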
Funding: partly supported by the National Natural Science Foundation of China under Grant No. 62072076 and the Research Fund of the National Key Laboratory of Computer Architecture under Grant No. CARCH201811.
Abstract: Current storage mechanisms take little account of the retention characteristics of data, which can produce fragments of various sizes among data sets. In some cases, this fragmentation may be serious and harm system performance. In this paper, we modify the current storage mechanism. We introduce an extra storage unit called the data bucket into the classical data management architecture, and we refine the data management mechanism to improve our design. By placing data according to their visit information, both the number of fragments and the fragment size are greatly reduced. Considering different data features and storage device conditions, we also improve solid state drive (SSD) lifetime by keeping data in different spaces. Experiments show that our design has a positive influence on SSD storage density and actual service time.
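The bucket-assignment idea of grouping data by visit information can be sketched in a few lines. The bucket names and thresholds here are arbitrary illustrations, not the paper's parameters: the point is only that data with similar access behavior, and hence similar lifetimes, land in the same space.

```python
def assign_bucket(visit_count, hot_threshold=100, warm_threshold=10):
    """Map a page's visit count to a storage bucket (thresholds invented)."""
    if visit_count >= hot_threshold:
        return "hot"
    if visit_count >= warm_threshold:
        return "warm"
    return "cold"

# Hypothetical per-page visit statistics.
pages = {"p1": 250, "p2": 42, "p3": 3}
buckets = {name: assign_bucket(v) for name, v in pages.items()}
```

Grouping like this means frequently rewritten data is invalidated together, reducing the mixed-lifetime fragments that cost SSD erase cycles.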
Abstract: As digital data circulation increases, information pollution and manipulation in journalism have become more prevalent. In this study, a new digital journalism model is designed to contribute to the solution of the main current problems in digital journalism, such as information pollution, manipulation, and accountability. The model uses blockchain technology due to its transparency, immutability, and traceability. However, it is difficult to provide with the blockchain data structure the mechanisms necessary for journalism, such as updating one piece of information, instantly updating all other information affected by it, establishing logical relationships between news items, making quick comparisons, sorting and indexing news, and keeping the changing information about the news in the system. For this reason, in our study we have developed a new data structure that provides the immutability, transparency, and traceability properties of the blockchain while also supporting the communication mechanisms necessary for journalism. The functionality of our proposed data structure is demonstrated in terms of communication mechanisms such as mutability, context, consistency, and reliability through example scenarios. Additionally, our data structure is compared with the data structure of blockchain technology in terms of time, space, and maintenance costs. Whereas the model size increases linearly in a blockchain, our model's size remains approximately constant, since the structure we developed is data-independent; in this way, maintenance costs are reduced. Since our model also has an indexing mechanism, it reduces linear-time search complexity to logarithmic time. As a result, the data structure we developed is found to have higher performance than blockchain in the journalism context. In future studies, we plan to test all aspects of the model with a pilot application, eliminate its shortcomings, and develop a holistic approach to the root causes of the problems in journalism.
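The complexity claim in this abstract, linear-time search dropping to logarithmic once an index exists, can be illustrated with a sorted index. This is a generic sketch, not the paper's data structure: the class name and fields are invented.

```python
import bisect

class NewsIndex:
    """Toy index: sorted key list for O(log n) lookup, dict for records."""

    def __init__(self):
        self.keys = []    # news identifiers kept sorted
        self.items = {}   # id -> news record

    def add(self, news_id, record):
        bisect.insort(self.keys, news_id)   # binary-search insertion point
        self.items[news_id] = record

    def find(self, news_id):
        i = bisect.bisect_left(self.keys, news_id)  # O(log n) search
        if i < len(self.keys) and self.keys[i] == news_id:
            return self.items[news_id]
        return None

idx = NewsIndex()
idx.add(42, {"title": "election update"})
idx.add(7, {"title": "market report"})
```

A plain blockchain must walk the chain to locate a news item; maintaining any ordered index alongside the immutable records is what buys the logarithmic lookup.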
Abstract: Observability and traceability of developed software are crucial to its success in software engineering. Observability is the ability to comprehend a system's internal state from the outside; monitoring is used to determine what causes system problems and why. Logs are among the most critical technologies for guaranteeing observability and traceability, and are frequently used to investigate software events. In current log technologies, software events are processed independently of each other; consequently, current logging technologies do not reveal relationships. However, system events do not occur independently of one another. With this perspective, our research has produced a new log design pattern that displays the relationships between events. In the design we have developed, the hash mechanism of blockchain technology enables the display of the logs' relationships. The created design pattern was compared to blockchain technology, demonstrating its performance through scenarios. It has been determined that the recommended log design pattern outperforms blockchain technology in terms of time and space for software engineering observability and traceability. In this context, it is anticipated that the log design pattern we provide will strengthen the methods used to monitor software projects and ensure the traceability of relationships.
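A hash mechanism that makes event relationships explicit can be sketched as follows. This is an illustrative guess at the flavor of such a pattern, not the paper's design: each entry records the hashes of the entries it is causally related to, so the links are part of the log itself rather than reconstructed afterwards.

```python
import hashlib
import json

def log_entry(event, related_hashes):
    """Create a log entry whose identity covers both the event text and
    the hashes of the entries it relates to (hypothetical schema)."""
    related = sorted(related_hashes)
    digest = hashlib.sha256(
        json.dumps({"event": event, "related": related}).encode()
    ).hexdigest()
    return {"event": event, "related": related, "hash": digest}

e1 = log_entry("request received", [])
e2 = log_entry("db query issued", [e1["hash"]])
e3 = log_entry("response sent", [e1["hash"], e2["hash"]])  # links both causes
```

Because an entry's hash covers its relationship list, tampering with either the event or its causal links changes the hash, which is the blockchain-style guarantee the abstract draws on.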
Abstract: Ontologies have been used for several years in the life sciences to formally represent concepts and reason about knowledge bases in domains such as the semantic web, information retrieval, and artificial intelligence. Exploring these domains for correspondence of semantic content requires calculating a measure of semantic similarity between concepts. Semantic similarity is a measure based on the similarity of meanings, referring to the similarity between two concepts belonging to one or more ontologies. The similarity between concepts is also a quantitative measure of information, calculated from the properties of the concepts and their relationships. This study proposes a method for finding the similarity between concepts in two different ontologies based on features, information content, and structure. More specifically, we propose a hybrid method using two existing measures to find the similarity between two concepts from different ontologies based on information content and the set of common superconcepts, i.e. the set of common parent concepts. We simulated our method on datasets; the results show that our measure provides similarity values that are better than those reported in the literature.
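The combination of common superconcepts and information content can be illustrated on a toy taxonomy. This is not the paper's hybrid measure: information content is approximated here by depth in the hierarchy (a common simplification), and the concept names are invented.

```python
parents = {               # child -> parent in a tiny single-parent taxonomy
    "animal": None,
    "mammal": "animal",
    "dog": "mammal",
    "cat": "mammal",
    "bird": "animal",
}

def ancestors(c):
    """A concept's superconcepts, including itself."""
    out = set()
    while c is not None:
        out.add(c)
        c = parents[c]
    return out

def depth(c):
    return len(ancestors(c)) - 1   # crude stand-in for information content

def similarity(a, b):
    """Lin-style score: 2*IC(deepest shared superconcept) / (IC(a)+IC(b))."""
    shared = ancestors(a) & ancestors(b)      # common superconcepts
    ic_shared = max(depth(c) for c in shared)
    return 2 * ic_shared / (depth(a) + depth(b))
```

Siblings under a specific superconcept ("dog", "cat" under "mammal") score higher than concepts whose only shared ancestor is the root, which is the behavior any superconcept-based measure should reproduce.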
Funding: funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R113), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Encephalitis is a brain inflammation disease that can lead to seizures, motor disability, or some loss of vision or hearing. Encephalitis can sometimes be life-threatening, so a proper diagnosis at an early stage is crucial. Therefore, in this paper we propose a deep learning model for computerized detection of encephalitis from electroencephalogram (EEG) data, together with a density-based clustering model to classify the distinctive waves of encephalitis. Customary clustering models usually employ a single computed virtual centroid point to define the cluster configuration, but this single point does not contain adequate information. To precisely extract accurate inner structural data, a multiple-centroid approach is employed and defined in this paper, which defines the cluster configuration by allocating weights to each state in the cluster. The multiple-EEG-view fuzzy learning approach incorporates data from every single view to enhance the model's clustering performance. A fuzzy density-based clustering model with multiple centroids (FDBC) is also presented. This model employs multiple real-state centroids to define clusters using the Partitioning Around Centroids algorithm. The experimental results validate the medical importance of the proposed clustering model.
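The gain from multiple centroids per cluster can be shown with a minimal fuzzy-membership sketch. This is only a loose illustration of the idea, not the FDBC algorithm: the affinity formula and the sample centroids are invented.

```python
def membership(point, clusters):
    """Soft membership of a point over clusters, where each cluster is
    described by a list of real-state centroids rather than a single one."""
    def dist(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
    # affinity: inverse of the mean distance to a cluster's centroids
    aff = [1.0 / (1e-9 + sum(dist(point, c) for c in cs) / len(cs))
           for cs in clusters]
    total = sum(aff)
    return [a / total for a in aff]

# Cluster 0 keeps two real-state centroids; cluster 1 keeps one.
clusters = [[(0.0, 0.0), (1.0, 0.0)], [(10.0, 10.0)]]
u = membership((0.5, 0.0), clusters)
```

Keeping several real states per cluster lets an elongated or irregular wave pattern attract points that a single virtual centroid would place poorly.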
Abstract: The potential impact of quantum computing on industries such as finance, healthcare, cryptography, and transportation is significant; however, sectors face challenges in understanding where to start because of the complex nature of this technology. Starting early to explore what needs to be done is crucial for providing sectors with the knowledge, tools, and processes necessary to keep pace with rapid advancements in quantum computing. This article emphasizes the importance of consultancy and governance solutions that help sectors prepare for the quantum computing revolution. The article begins by discussing why sectors need to be prepared for quantum computing and emphasizes the importance of proactive preparation, illustrating this point with a real-world example of a partnership. Subsequently, the article describes the benefits of quantum computing readiness, including increased competitiveness, improved security, and structured data. In addition, it discusses the steps that various sectors can take to achieve quantum readiness, considering the potential risks and opportunities in their industries. The proposed solutions for achieving quantum computing readiness include establishing a quantum computing office, contracting with major quantum computing companies, and learning from quantum computing organizations. The article details the advantages and disadvantages of each of these steps and emphasizes the need to carefully evaluate their potential drawbacks to ensure that they align with a sector's unique needs, goals, and available resources. Finally, it proposes various solutions and recommendations for sectors to achieve quantum computing readiness.
Abstract: Reaction products of 2,4,6-tris(4-phenyl-phenoxy)-1,3,5-triazine, derived from 4-phenylphenol cyanate ester, and phenyl glycidyl ether were analyzed. In addition to an isocyanurate compound and an oxazolidone compound, which are well known as reaction products of cyanate esters and epoxy resins, compounds with a hybrid cyanurate/isocyanurate ring structure were identified. The Gibbs free energies of the compound having a hybrid cyanurate/isocyanurate ring structure with two isocyanurate moieties were found through calculations to be lower than that of the compound with a cyanurate ring structure. The calculation data support the existence of the hybrid cyanurate/isocyanurate ring structure. It was revealed that isomerization from cyanurate to isocyanurate occurs via a hybrid cyanurate/isocyanurate ring structure in the reaction of aryl cyanurate and epoxy.