After a systematic review of 38 current intelligent city evaluation systems (ICESs) from around the world, this research analyzes the secondary and tertiary indicators of these 38 ICESs from the perspectives of scale structuring, approaches, and indicator selection, and determines their common base. From this base, the fundamentals of the City Intelligence Quotient (City IQ) Evaluation System are developed, and five dimensions are selected after a clustering analysis. The basic version, City IQ Evaluation System 1.0, involved 275 experts from 14 high-end research institutions, including the Chinese Academy of Engineering, the National Academy of Science and Engineering (Germany), the Royal Swedish Academy of Engineering Sciences, the Planning Management Center of the Ministry of Housing and Urban-Rural Development of China, and the Development Research Center of the State Council of China. City IQ Evaluation System 2.0 was further developed, with improvements in its universality, openness, and dynamic adjustment capability. After employing deviation evaluation methods in the IQ assessment, City IQ Evaluation System 3.0 was conceived. The research team has conducted a repeated assessment of 41 intelligent cities around the world using City IQ Evaluation System 3.0. The results show that the City IQ Evaluation System, developed on the basis of intelligent life, features more rational indicators selected from data sources that offer better universality, openness, and dynamics, and is more sensitive and precise.
Open source intelligence is one of the most important public data sources for strategic information analysis. Information perception is a primary and core issue of strategic information research, so this paper expounds a perception method for strategic information in the open source intelligence environment, as well as the framework and basic process of information perception. To match the information perception results with the information depiction results, the paper conducts a practical exploration of information acquisition, perception, depiction, and analysis. It also introduces and develops a monitoring platform for information perception. The results show that the proposed method is feasible.
Artificial intelligence (AI) tools, like OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence demonstrates capability at the level of a medical student in standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT faces limitations, including critical thinking tasks and generating false references, necessitating stringent cross-verification. Ensuing concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluation of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration among AI developers, researchers, educators, and policymakers is vital. The development of domain-specific tools, guidelines, and regulations, and the facilitation of public dialogue, must underpin these endeavors to responsibly harness AI's potential.
Law enforcement agencies have a restricted area in which their powers apply, which is called their jurisdiction. These restrictions also apply to the Internet. However, on the Internet, the physical borders of a jurisdiction, typically country borders, are hard to discover. In our case, it is hard to establish whether someone involved in criminal online behavior is indeed a Dutch citizen. We propose a way to overcome the arduous task of manually investigating whether a user on an Internet forum is Dutch or not. More precisely, we aim to detect whether a given English text is written by a Dutch native author. To develop a detector, we follow a machine learning approach, for which we need to prepare a specific training corpus. To obtain a corpus that is representative of online forums, we collected a large number of English forum posts from Dutch and non-Dutch authors on Reddit. To learn a detection model, we used a bag-of-words representation to capture potential misspellings, grammatical errors, or unusual turns of phrase that are characteristic of the authors' mother tongue. For this learning task, we compare the linear support vector machine and regularized logistic regression using the appropriate performance metrics: F1 score, precision, and average precision. Our results show that logistic regression with frequency-based feature selection performs best at predicting Dutch natives. Further study should address the general applicability of these results, that is, whether the developed models apply to other forums with comparably high performance.
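The pipeline this abstract describes, a bag-of-words representation with frequency-based feature selection, compared across a linear SVM and regularized logistic regression, can be sketched as follows. This is a minimal illustration, not the authors' actual setup: the toy posts, labels, and parameter values are invented assumptions standing in for the collected Reddit corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, f1_score, precision_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the Reddit corpus (1 = post by a Dutch native author).
posts = [
    "I make tomorrow a walk to the station",       # Dutch-influenced phrasing
    "We discuss it next week on the meeting",
    "The hike was lovely and the weather held up",
    "Nothing unusual about this English sentence",
]
labels = [1, 1, 0, 0]

for name, clf in [("logistic regression", LogisticRegression()),
                  ("linear SVM", LinearSVC())]:
    # min_df drops terms below a frequency threshold: a simple form of
    # frequency-based feature selection over the bag-of-words vocabulary.
    model = make_pipeline(CountVectorizer(min_df=1), clf)
    model.fit(posts, labels)
    preds = model.predict(posts)
    scores = model.decision_function(posts)  # ranking scores for avg precision
    print(name,
          "F1:", f1_score(labels, preds),
          "precision:", precision_score(labels, preds),
          "average precision:", average_precision_score(labels, scores))
```

Training-set metrics on a four-post toy corpus only confirm that the pipeline runs; a comparison like the one in the abstract would be computed on a held-out split of the real forum data.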
Funding: Supported by the National Social Science Fund Project (No. 18BTQ054)