Purpose: This paper aims to assess whether the extent of openness and the coverage of data sets released by European governments have a significant impact on citizen trust in public institutions. Design/methodology/approach: Data on openness and coverage were collected from the Open Data Inventory 2018 (ODIN), by Open Data Watch; institutional trust is built as a formative construct based on the European Social Survey (ESS), Round 9. The relations between open government data features and trust were tested using structural equation modelling (SEM). Findings: The paper reveals that as European governments improve data openness, disaggregation, and time coverage, people tend to trust them more. However, the size of the effect is still small and, comparatively, the effect of data coverage on citizens' confidence is more than twice the impact of openness. Research limitations: This paper analyzes the causal effect of Open Government Data (OGD) features captured at a single moment in time. In upcoming years, as OGD is implemented and a more consistent effect on people is expected, time series analysis will provide deeper insight. Practical implications: Public officers should continue working on the development of a technological framework that contributes to making OGD truly open. They should improve the added value of the increasing amount of open data currently available in order to boost internal and external innovations valuable both for public agencies and citizens. Originality/value: In a field of knowledge with little quantitative empirical evidence, this paper provides updated support for the positive effect of OGD strategies and points out areas of improvement in terms of the value citizens can get from OGD coverage and openness.
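As a rough illustration of the modelling step, the sketch below fits a comparable structural model in Python with the semopy package. All column names are hypothetical, and trust is specified reflectively here for brevity, whereas the paper builds it as a formative construct:

```python
import pandas as pd
from semopy import Model

# df: one row per respondent/country; hypothetical columns:
# trust_parl, trust_legal, trust_police (ESS items), odin_openness, odin_coverage (ODIN scores)
desc = """
trust =~ trust_parl + trust_legal + trust_police
trust ~ odin_openness + odin_coverage
"""

df = pd.read_csv("ess_odin_merged.csv")  # hypothetical merged ESS/ODIN dataset
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, e.g. the relative effects of coverage vs. openness
```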
Lane change prediction is critical for crash avoidance but challenging, as it requires understanding of the instantaneous driving environment. With cutting-edge artificial intelligence and sensing technologies, autonomous vehicles (AVs) are expected to have exceptional perception systems that instantaneously capture their driving environments for predicting lane changes. By exploring the Waymo open motion dataset, this study proposes a framework to explore autonomous driving data and investigate lane change behaviors. Within the framework, this study develops a Long Short-Term Memory (LSTM) model to predict lane changing behaviors. The concept of Vehicle Operating Space (VOS) is introduced to quantify a vehicle's instantaneous driving environment as an important indicator used to predict vehicle lane changes. To examine the robustness of the model, a series of sensitivity analyses are conducted by varying the feature selection, prediction horizon, and training data balancing ratios. The test results show that including VOS in modeling can speed up the loss decay in the training process and lead to higher accuracy and recall for predicting lane-change behaviors. This study offers an example, along with a methodological framework, for transportation researchers to use emerging autonomous driving data to investigate driving behaviors and traffic environments.
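A minimal sketch of the kind of LSTM classifier described here, written in Keras; the timestep count, feature dimension, and layer sizes are assumptions, not the study's actual configuration:

```python
from tensorflow import keras

# X would have shape (samples, timesteps, features): per-frame kinematics plus VOS indicators
T, F = 30, 8  # hypothetical: 3 s of history at 10 Hz, 8 features per frame

model = keras.Sequential([
    keras.layers.Input(shape=(T, F)),
    keras.layers.LSTM(64),                          # summarizes the driving-context sequence
    keras.layers.Dense(1, activation="sigmoid"),    # P(lane change within the horizon)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Recall()])
model.summary()
# model.fit(X_train, y_train, validation_split=0.2, epochs=20)
```

Sensitivity analysis as described in the abstract would then amount to re-running this fit while varying F (with/without VOS features), T (prediction horizon), and the class balance of the training set.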
This paper presents the experience gathered in the Italian alpine city of Bolzano within the project "Bolzano Traffic", whose goal is the introduction of an experimental open ITS platform for local service providers, fostering the diffusion of advanced traveller information services and the future deployment of cooperative mobility systems in the region. Several end-user applications targeted to the needs of different user groups have been developed in collaboration with local companies and research centers; a partnership with the EU Co-Cities project has been activated as well. The implemented services rely on real-time travel and traffic information collected by urban traffic monitoring systems or published by local stakeholders (e.g. public transportation operators). Active involvement of end-users, who have recently started testing these demo applications for free, is currently ongoing.
Purpose: To develop a set of metrics and identify criteria for assessing the functionality of LOD KOS products, while providing common guiding principles that can be used by LOD KOS producers and users to maximize the functions and usages of LOD KOS products. Design/methodology/approach: Data collection and analysis were conducted at three time periods in 2015–16, 2017 and 2019. The sample data used in the comprehensive data analysis comprises all datasets tagged as types of KOS in the Datahub and extracted through their respective SPARQL endpoints. A comparative study of the LOD KOS collected from the terminology services Linked Open Vocabularies (LOV) and BioPortal was also performed. Findings: The study proposes a set of Functional, Impactful and Transformable (FIT) metrics for LOD KOS as value vocabularies. The FAIR principles, with additional recommendations, are presented for LOD KOS as open data. Research limitations: The metrics need to be further tested and aligned with the best practices and international standards of both open data and various types of KOS. Practical implications: Assessments performed with FAIR and FIT metrics support the creation and delivery of user-friendly, discoverable and interoperable LOD KOS datasets, which can be used for innovative applications, act as a knowledge base, become a foundation of semantic analysis and entity extraction, and enhance research in science and the humanities. Originality/value: Our research provides best practice guidelines for LOD KOS as value vocabularies.
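For context, harvesting a vocabulary through its SPARQL endpoint, as done here for the Datahub datasets, might look like the following sketch using SPARQLWrapper; the endpoint URL is a placeholder:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint of one KOS dataset; the study iterated over endpoints listed in Datahub.
endpoint = SPARQLWrapper("https://example.org/sparql")
endpoint.setQuery("""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT (COUNT(DISTINCT ?c) AS ?concepts) WHERE { ?c a skos:Concept }
""")
endpoint.setReturnFormat(JSON)

res = endpoint.query().convert()
# One simple size indicator per vocabulary, usable as raw input to functionality metrics
print(res["results"]["bindings"][0]["concepts"]["value"])
```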
Purpose: Our work seeks to overcome data quality issues related to incomplete author affiliation data in bibliographic records in order to support accurate and reliable measurement of international research collaboration (IRC). Design/methodology/approach: We propose, implement, and evaluate a method that leverages the Web-based knowledge graph Wikidata to resolve publication affiliation data to particular countries. The method is tested with general and domain-specific data sets. Findings: Our evaluation covers the magnitude of improvement, accuracy, and consistency. Results suggest the method is beneficial, reliable, and consistent, and thus a viable and improved approach to measuring IRC. Research limitations: Though our evaluation suggests the method works with both general and domain-specific bibliographic data sets, it may perform differently with data sets not tested here. Further limitations stem from the use of the R programming language and R libraries for country identification, as well as imbalanced data coverage and quality in Wikidata that may also change over time. Practical implications: The new method helps to increase accuracy in IRC studies and provides a basis for further development into a general tool that enriches bibliographic data using the Wikidata knowledge graph. Originality: This is the first attempt to enrich bibliographic data using a peer-produced, Web-based knowledge graph like Wikidata.
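The paper implements its method in R; a hypothetical Python analogue of the core lookup, resolving an affiliation label to a country through the public Wikidata SPARQL endpoint (property P17 = country), could look like this:

```python
import requests

def affiliation_country(name: str) -> str | None:
    """Resolve an affiliation string to a country label via Wikidata (P17 = country)."""
    query = """
    SELECT ?countryLabel WHERE {
      ?org rdfs:label "%s"@en ;
           wdt:P17 ?country .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
    } LIMIT 1
    """ % name
    r = requests.get("https://query.wikidata.org/sparql",
                     params={"query": query, "format": "json"},
                     headers={"User-Agent": "irc-affiliation-demo/0.1"})
    rows = r.json()["results"]["bindings"]
    return rows[0]["countryLabel"]["value"] if rows else None

print(affiliation_country("Stanford University"))  # expected: United States of America
```

A real pipeline would add fuzzy matching and caching, since raw affiliation strings rarely match Wikidata labels exactly; that is precisely the data-quality gap the paper addresses.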
Purpose: The purpose of this exploratory study is to provide modern local governments with potential use cases for their open data, in order to help inform related future policies and decision-making. The concrete context was that of the Växjö municipality in southeastern Sweden. Design/methodology/approach: The methodology was two-fold: 1) a survey of potential end users (n=151) from a local university; and 2) analysis of survey results using a theoretical model regarding local strategies for implementing open government data. Findings: Most datasets predicted to be useful concerned: sustainability and environment; preschool and school; municipality and politics. The use contexts given were primarily research and development and informing policies and decision-making, but also education, informing personal choices, informing citizens, and creating services based on open data. Not least, the need for educating target user groups on data literacy emerged. A tentative pattern comprising a technical perspective on open data and a social perspective on open government was identified. Research limitations: In line with available funding, the study was exploratory in nature and implemented as an anonymous web-based survey of employees and students at the local university. Further research involving (qualitative) surveys of all stakeholders would allow a more complete picture of the matter. Practical implications: The study determines potential use cases and use contexts for open government data, in order to help inform related future policies and decision-making. Originality/value: Modern local governments, especially in Sweden, face the challenge of how to make their data open, how to learn which types of data will be most relevant for their end users, and which societal purposes the data will serve. The paper contributes knowledge about local citizens' attitudes to open government data that modern local governments can draw on.
This study systematically analyzes the composition of post-marketing adverse drug reaction (ADR) data and its open mode in the EU, and summarizes its characteristics. EU post-marketing ADR data is open to six categories of stakeholders: EMA; EC; medicines regulatory authorities in EEA member states; healthcare professionals and the public; Marketing Authorization Holders; and academia, WHO and medicines regulatory authorities in third countries. The EU has implemented hierarchical opening for ADRs, with different levels containing different data and facing different stakeholders. Openness is divided into active and passive openness. In opening up data, the EU complies with relevant personal data protection laws to protect the privacy of individuals. The EU's post-marketing adverse drug reaction data openness is thus characterized by a combination of data openness and privacy protection, of active and passive openness, and a hierarchy of data openness. It is hoped that this can provide a reference for the opening up of post-marketing adverse drug reaction data in China.
This research describes a quantitative, rapid, and low-cost methodology for debris flow susceptibility evaluation at the basin scale using open-access data and geodatabases. The proposed approach can aid decision makers in land management and territorial planning by first screening for areas with higher debris flow susceptibility. Five environmental predisposing factors, namely bedrock lithology, fracture network, quaternary deposits, slope inclination, and hydrographic network, were selected as independent parameters, and their mutual interactions were described and quantified using the Rock Engineering System (RES) methodology. For each parameter, specific indexes were proposed, aiming to provide a final synthetic and representative index of debris flow susceptibility at the basin scale. The methodology was tested in four basins located in the Upper Susa Valley (NW Italian Alps), where debris flow events are the predominant natural hazard. The proposed matrix can represent a useful standardized tool, universally applicable, since it is independent of the type and characteristics of the basin.
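A toy illustration of the RES weighting scheme the study relies on: row and column sums (cause and effect) of an interaction matrix yield parameter weights, which are combined with per-basin ratings into a synthetic susceptibility index. The matrix values and ratings below are invented for illustration:

```python
import numpy as np

# Hypothetical 5x5 RES interaction matrix: M[i, j] codes the influence of
# parameter i on parameter j (0 = no interaction ... 4 = critical); diagonal unused.
# Order: lithology, fracture network, quaternary deposits, slope, hydrography.
M = np.array([
    [0, 3, 2, 1, 1],
    [3, 0, 2, 2, 1],
    [1, 1, 0, 2, 2],
    [1, 2, 3, 0, 3],
    [0, 1, 2, 2, 0],
])

cause = M.sum(axis=1)    # C_i: how strongly parameter i acts on the system
effect = M.sum(axis=0)   # E_i: how strongly the system acts on parameter i
weights = (cause + effect) / (cause + effect).sum()  # normalized parameter weights

ratings = np.array([0.8, 0.6, 0.4, 0.9, 0.5])  # hypothetical basin ratings in [0, 1]
susceptibility = float(weights @ ratings)       # final synthetic index for one basin
print(round(susceptibility, 3))
```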
In recent years, transparency and accountability seem to have found new impulse with the development of ICT (information and communication technology) and the prospect of open data, which invests the public system at national and supranational levels. Public institutions tend to make available to the public more data and information concerning the administration and the manner of use of public goods and resources. At the same time, each institution is called upon to deal with the demand for transparency and participation by citizens who increasingly use Web 2.0 and social media. After a reflection on how public administrations acted in the Web 1.0 phase to practice transparency and accountability in terms of communication, this paper considers the elements of continuity and the new opportunities linked to the advent of Web 2.0 and open data. At the end of this analysis, the focus is on the strengths and weaknesses of this process, with particular attention to the role of public communication.
The rising awareness of environmental issues and the increase of renewable energy sources (RESs) have led to a shift in energy production toward RES, such as photovoltaic (PV) systems, and toward a distributed generation (DG) model of energy production that requires systems in which energy is generated, stored, and consumed locally. In this work, we present a methodology that integrates geographic information system (GIS)-based PV potential assessment procedures with models for the estimation of both energy generation and consumption profiles. In particular, we have created an innovative infrastructure that co-simulates PV integration on building rooftops together with an analysis of households' electricity demand. Our model relies on high spatiotemporal resolution and considers both shadowing effects and real-sky conditions for solar radiation estimation. It integrates methodologies to estimate energy demand with high temporal resolution, accounting for realistic populations with realistic consumption profiles. Such a solution enables concrete recommendations to be drawn in order to promote an understanding of urban energy systems and the integration of RES in the context of future smart cities. The proposed methodology is tested and validated within the municipality of Turin, Italy. For the whole municipality, we estimate both the electricity absorbed by the residential sector (simulating a realistic population) and the electrical energy that could be produced by installing PV systems on buildings' rooftops (considering two different scenarios: the former using only the rooftops of residential buildings, the latter using all available rooftops). The capabilities of the platform are explored through an in-depth analysis of the obtained results. Generated power and energy profiles are presented, emphasizing the flexibility of the spatial and temporal resolution of the results. Additional energy indicators are presented for the self-consumption of produced energy and the avoidance of CO₂ emissions.
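For scale, a back-of-envelope version of the rooftop yield estimate is sketched below; the paper's GIS model additionally accounts for shadowing and real-sky radiation, and the efficiency and performance-ratio values here are generic assumptions:

```python
def pv_power_kw(ghi_wm2: float, area_m2: float, eta: float = 0.20, pr: float = 0.80) -> float:
    """Instantaneous rooftop PV output in kW:
    irradiance x usable area x module efficiency x performance ratio."""
    return ghi_wm2 * area_m2 * eta * pr / 1000.0

# e.g. 250 m2 of usable residential roof under 800 W/m2 of irradiance
print(pv_power_kw(800, 250))  # -> 32.0 kW
```

Summing such hourly estimates per rooftop segment, and comparing them with simulated household demand profiles, yields the self-consumption indicators the abstract mentions.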
Lithium-ion batteries are key drivers of the renewable energy revolution, bolstered by progress in battery design, modelling, and management. Yet, achieving high-performance battery health prognostics is a significant challenge. With the availability of open data and software, coupled with automated simulations, deep learning has become an integral component of battery health prognostics. We offer a comprehensive overview of potential deep learning techniques specifically designed for modeling and forecasting the dynamics of multiphysics and multiscale battery systems. Following this, we provide a concise summary of publicly available lithium-ion battery test and cycle datasets. By providing illustrative examples, we emphasize the efficacy of five techniques capable of enhancing deep learning for accurate battery state prediction and health-focused management. Each of these techniques offers unique benefits. (1) Transformer models address challenges using self-attention mechanisms and positional encoding methods. (2) Transfer learning improves learning tasks within a target domain by leveraging knowledge from a source domain. (3) Physics-informed learning uses prior knowledge to enhance learning algorithms. (4) Generative adversarial networks (GANs) earn praise for their ability to generate diverse and high-quality outputs, exhibiting outstanding performance with complex datasets. (5) Deep reinforcement learning enables an agent to make optimal decisions through continuous interactions with its environment, thus maximizing cumulative rewards. In this Review, we highlight examples that employ these techniques for battery health prognostics, summarizing both their challenges and opportunities. These methodologies offer promising prospects for researchers and industry professionals, enabling the creation of specialized network architectures that autonomously extract features, especially for long-range spatial-temporal connections across extended timescales. The outcomes could include improved accuracy, faster training, and enhanced generalization.
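As an illustration of technique (2), a minimal Keras sketch of transfer learning for battery state-of-health estimation: pretrain on a source dataset, freeze the feature extractor, and fine-tune the head on the target cells. Shapes, names, and hyperparameters are hypothetical:

```python
from tensorflow import keras

# Source model: e.g. trained on a large public cycling dataset of one cell chemistry
base = keras.Sequential([
    keras.layers.Input(shape=(100, 4)),   # hypothetical: 100 cycles x 4 features per cycle
    keras.layers.LSTM(32, name="features"),
    keras.layers.Dense(1, name="soh"),    # state-of-health regression head
])
base.compile(optimizer="adam", loss="mse")
# base.fit(X_source, y_source, epochs=50)

# Target domain: freeze the shared feature extractor, fine-tune only the head
base.get_layer("features").trainable = False
base.compile(optimizer=keras.optimizers.Adam(1e-4), loss="mse")  # recompile after freezing
# base.fit(X_target, y_target, epochs=10)  # small dataset from the target cells
```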
In this study, we conducted a search for dark matter using part of the data recorded by the CMS experiment during Run I of the LHC in 2012, with a center of mass energy of 8 TeV and an integrated luminosity of 11.6 fb⁻¹. These data were gathered from the CMS open data. Dark matter, in the framework of the simplified model (mono-Z′), can be produced from proton-proton collisions in association with a new hypothetical gauge boson, Z′. Thus, the search was conducted in the dimuon plus large missing transverse momentum channel. One benchmark scenario of mono-Z′, known as the light vector, was used for interpreting the CMS open data. No evidence of dark matter was observed, and exclusion limits were set on the masses of dark matter and Z′ at 95% confidence level.
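The dimuon channel relies on reconstructing the invariant mass of the muon pair alongside the missing transverse momentum. A self-contained sketch of that kinematic computation (not the analysis code itself; the example muons are invented) is given below:

```python
import math

M_MU = 0.1057  # muon mass, GeV

def p4(pt, eta, phi, m=M_MU):
    """Four-momentum (E, px, py, pz) from detector coordinates (pt, eta, phi), GeV."""
    px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
    e = math.sqrt(m * m + px * px + py * py + pz * pz)
    return e, px, py, pz

def dimuon_mass(mu1, mu2):
    """Invariant mass of a muon pair: m^2 = E^2 - |p|^2 of the summed four-momenta."""
    e, px, py, pz = (a + b for a, b in zip(p4(*mu1), p4(*mu2)))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Hypothetical event: two opposite-sign muons as (pt [GeV], eta, phi)
print(round(dimuon_mass((45.0, 0.1, 0.5), (40.0, -0.2, 2.8)), 1))
# A selection would then require, e.g., a mass window around the Z' hypothesis and large MET.
```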
The"omics"revolution has transformed the biomedical research landscape by equipping scientists with the ability to interrogate complex biological phenomenon and disease processes at an unprecedented level.Th...The"omics"revolution has transformed the biomedical research landscape by equipping scientists with the ability to interrogate complex biological phenomenon and disease processes at an unprecedented level.The volume of"big"data generated by the different omics studies such as genomics,transcriptomics,proteomics,and metabolomics has led to the concurrent development of computational tools to enable in silico analysis and aid data deconvolution.Considering the intensive resources and high costs required to generate and analyze big data,there has been centralized,collaborative efforts to make the data and analysis tools freely available as"Open Source,"to benefit the wider research community.Pancreatology research studies have contributed to this"big data rush"and have additionally benefitted from utilizing the open source data as evidenced by the increasing number of new research findings and publications that stem from such data.In this review,we briefly introduce the evolution of open source omics data,data types,the"FAIR"guiding principles for data management and reuse,and centralized platforms that enable free and fair data accessibility,availability,and provide tools for omics data analysis.We illustrate,through the case study of our own experience in mining pancreatitis omics data,the power of repurposing open source data to answer translationally relevant questions in pancreas research.展开更多
Data exploration, usually the first step in data analysis, is a useful method to tackle challenges caused by big geoscience data. It conducts quick analysis of data, investigates patterns, and generates/refines research questions to guide advanced statistics and machine learning algorithms. The background of this work is the open mineral data provided by several sources, and the focus is different types of associations in mineral properties and occurrences. Researchers in mineralogy have been applying different techniques for exploring such associations. Although the explored associations can lead to new scientific insights that contribute to crystallography, mineralogy, and geochemistry, the exploration process is often daunting due to the wide range and complexity of the factors involved. In this study, our purpose is to implement a visualization tool based on the adjacency matrix for a variety of datasets and to test its utility for quick exploration of association patterns in mineral data. Algorithms, software packages, and use cases have been developed to process a variety of mineral data. The results demonstrate the efficiency of the adjacency matrix in real-world usage. All the developed works of this study are open source and open access.
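A minimal example of the adjacency matrix idea: a symmetric co-occurrence matrix of mineral pairs rendered as a heatmap. The minerals and counts below are invented for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical co-occurrence counts: how often each mineral pair occurs at the same locality
minerals = ["quartz", "pyrite", "calcite", "galena"]
A = np.array([[ 0, 42, 30,  5],
              [42,  0, 12, 18],
              [30, 12,  0,  2],
              [ 5, 18,  2,  0]])

fig, ax = plt.subplots()
im = ax.imshow(A, cmap="viridis")                 # each cell encodes one pairwise association
ax.set_xticks(range(len(minerals)), labels=minerals, rotation=45)
ax.set_yticks(range(len(minerals)), labels=minerals)
fig.colorbar(im, label="co-occurrences")
plt.tight_layout()
plt.show()
```

Dense rows or blocks in such a matrix are exactly the quick visual cues for association patterns that the study's tool is designed to surface.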
After a systematic review of 38 current intelligent city evaluation systems (ICESs) from around the world, this research analyzes the secondary and tertiary indicators of these 38 ICESs from the perspectives of scale structuring, approaches and indicator selection, and determines their common base. From this base, the fundamentals of the City Intelligence Quotient (City IQ) Evaluation System are developed and five dimensions are selected after a clustering analysis. The basic version, City IQ Evaluation System 1.0, involved 275 experts from 14 high-end research institutions, including the Chinese Academy of Engineering, the National Academy of Science and Engineering (Germany), the Royal Swedish Academy of Engineering Sciences, the Planning Management Center of the Ministry of Housing and Urban-Rural Development of China, and the Development Research Center of the State Council of China. City IQ Evaluation System 2.0 was further developed, with improvements in its universality, openness, and dynamic adjustment capability. After employing deviation evaluation methods in the IQ assessment, City IQ Evaluation System 3.0 was conceived. The research team has conducted a repeated assessment of 41 intelligent cities around the world using City IQ Evaluation System 3.0. The results have proved that the City IQ Evaluation System, developed on the basis of intelligent life, features more rational indicators selected from data sources that can offer better universality, openness, and dynamics, and is more sensitive and precise.
The rapid increase in the publication of knowledge bases as linked open data (LOD) warrants serious consideration from all concerned, as this phenomenon will potentially scale exponentially. This paper briefly describes the evolution of the LOD and the emerging world-wide semantic web (WWSW), and explores the scalability and performance features of the service oriented architecture that forms the foundation of the semantic technology platform developed at MIMOS Bhd. for addressing the challenges posed by the intelligent future internet. This paper concludes with a review of the current status of the agriculture linked open data.
The technological landscape for managing big Earth observation (EO) data ranges from global solutions on large cloud infrastructures with web-based access to self-hosted implementations. EO data cubes are a leading technology for facilitating big EO data analysis and can be deployed on different spatial scales: local, national, regional, or global. Several EO data cubes with a geographic focus ("local EO data cubes") have been implemented. However, their alignment with the Digital Earth (DE) vision and the benefits and trade-offs in creating and maintaining them ought to be further examined. We investigate local EO data cubes from five perspectives (science, business and industry, government and policy, education, communities and citizens) and illustrate four examples covering three continents at different geographic scales (Swiss Data Cube, semantic EO data cube for Austria, DE Africa, Virginia Data Cube). A local EO data cube can benefit many stakeholders and players but requires several technical developments. These developments include enabling local EO data cubes based on public, global, and cloud-native EO data streaming and interoperability between local EO data cubes. We argue that blurring the dichotomy between global and local aligns with the DE vision to access the world's knowledge and explore information about the planet.
The Semantic Web seems finally close to fulfilling its promise of a real world-wide graph of interconnected resources. The SPARQL query language and protocols and the Linked Open Data initiative have laid the way for endless data endpoints sparse around the globe. However, for the Semantic Web to really happen, it does not suffice to get billions of triples out there: these must be shareable, interlinked and conform to widely accepted vocabularies. While more and more data are converted from already available large knowledge repositories of companies and organizations, the question naturally arises whether these should be carefully converted to semantically consistent ontology vocabularies or find other, shallower representations for their content. The danger is to come up with massive amounts of useless data, a boomerang which could prove contradictory to the success of the web of data. In this paper, I provide some insights on common problems which may arise when porting huge amounts of existing data or conceptual schemes (very common in the agriculture domain) to the Resource Description Framework (RDF), and address different modeling choices, discussing in particular the relationship between the two main modeling vocabularies offered by W3C: OWL and SKOS.
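As a concrete instance of the "shallow" SKOS option contrasted here with full OWL ontologies, the sketch below builds one agricultural concept with rdflib in Python; the URIs are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

# Hypothetical vocabulary namespace; SKOS models terms and broader/narrower links
# without committing to the formal semantics an OWL ontology would impose.
EX = Namespace("http://example.org/agrivoc/")

g = Graph()
g.add((EX.maize, RDF.type, SKOS.Concept))
g.add((EX.maize, SKOS.prefLabel, Literal("maize", lang="en")))
g.add((EX.maize, SKOS.altLabel, Literal("corn", lang="en")))
g.add((EX.maize, SKOS.broader, EX.cereals))   # thesaurus-style hierarchy, not subclassing

print(g.serialize(format="turtle"))
```

The trade-off the paper discusses is visible even in this toy: skos:broader records an informal hierarchy cheaply, whereas an OWL rendering would have to decide whether maize is a class, an individual, or both.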
Cities are the most preferred dwelling places, offering better employment opportunities, educational hubs, medical services, recreational facilities, theme parks, shopping malls, etc. Cities are the driving forces of any national economy too. Unfortunately, nowadays these cities produce circa 70% of pollutants, even though they occupy only 2% of the surface of the Earth. Public utility services cannot meet the demands of unexpected growth, and the resulting filthiness in cities is decreasing the quality of life. In this light, our research paper concentrates on the necessity of "Smart Cities", which are the basis for citizen-centric services. This article throws light on smart cities and their important roles, and presents "Smart Cities" concepts pictorially. Moreover, it explains the "Barcelona Smart City" built using Internet of Things technologies, a good example of the urban paradigm shift: through the Internet of Things, Barcelona provides quality of life to all its urban citizens.
Hazard maps are usually prepared for each type of disaster, including seismic hazard maps, flood hazard maps, and landslide hazard maps. However, when the general public attempts to check their own disaster risk, most are likely not aware of the specific types of disaster, so we first need to know what kinds of hazards are important. Information that integrates multiple hazards, however, is not well maintained, and there are few such studies. On the other hand, in Japan a lot of hazard information is released on the Internet. We therefore summarized and assessed hazard data that can be accessed online regarding shelters (where evacuees live during disasters) and their catchments (areas assigned to each shelter) in Yokohama City, Kanagawa Prefecture. Based on the results, we investigated whether grouping by cluster analysis would allow for multi-hazard assessment. We used four natural disaster hazards (seismic, flood, tsunami, sediment disaster) plus population and senior population, six parameters in total. However, since the characteristics of the population and the senior population were almost the same, only population data was used in the final examination. From the cluster analysis, it was found to be appropriate to group the designated evacuation centers in Yokohama City into six groups. In addition, each of the six groups was found to have explainable characteristics, confirming the effectiveness of multi-hazard mapping using cluster analysis; for example, groups where all hazards are low, where both flood and seismic hazards are high, where sediment hazards are high, and so on. In many Japanese cities, disaster prevention measures have been constructed in consideration of ground hazards, mainly for earthquake disasters. In this paper, we confirmed the consistency between the results of the multi-hazard evaluation and the existing ground hazard map, and examined the usefulness of the designated evacuation centers. Finally, the validity was confirmed by comparing this result with ground hazards based on actual measurements from past research: in places where the seismic hazard is large, this is consistent with the measured susceptibility to shaking also being large.
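A compact sketch of the clustering step using scikit-learn's KMeans on standardized hazard and population features; the rows and the two-cluster setting are invented for illustration (the study itself settles on six groups citywide):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical rows: one shelter catchment each.
# Columns: seismic, flood, tsunami, sediment hazard scores, population
X = np.array([[0.8, 0.1, 0.0, 0.2, 12000],
              [0.3, 0.9, 0.4, 0.1,  8000],
              [0.2, 0.1, 0.0, 0.8,  3000],
              [0.1, 0.1, 0.0, 0.1,  5000]])

Xs = StandardScaler().fit_transform(X)  # put hazards and population on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xs)
print(labels)  # group id per catchment, e.g. "seismic-dominated" vs "low all-hazard"
```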