Funding: Supported by the National Natural Science Foundation of China (61472192) and the Scientific and Technological Support Project (Society) of Jiangsu Province (BE2016776).
Abstract: The Internet is now a large-scale platform with big data. Finding truth in huge datasets has attracted extensive attention, because it can maintain the quality of user-collected data and provide users with accurate, efficient data. However, current truth-finder algorithms are unsatisfactory because of their low accuracy and high complexity. This paper proposes a truth-finder algorithm based on entity attributes (TFAEA). Building on the iterative computation of source reliability and fact accuracy, TFAEA considers the degree of interaction among facts and the degree of dependence among sources in order to simplify typical truth-finder algorithms. To improve accuracy, TFAEA combines one-way text similarity with factual conflict to calculate the mutual support degree among facts. Furthermore, TFAEA uses the symmetric saturation of data sources to calculate the degree of dependence among sources. Experimental results show that TFAEA is both more stable and more accurate than typical truth-finder algorithms.
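The iterative skeleton shared by this family of truth-finder algorithms can be sketched as follows. This is a minimal Python illustration, not the authors' TFAEA implementation: the `claims` structure, the `similarity` function, and the mutual-support weight `rho` are illustrative assumptions standing in for the paper's one-way similarity, factual-conflict, and source-dependence terms.

```python
from collections import defaultdict

def truth_finder(claims, similarity, rounds=10, rho=0.3):
    """Iteratively estimate source reliability and fact confidence.

    claims: dict mapping source -> {object: fact}
    similarity: function (fact_a, fact_b) -> support in [0, 1]
    rho: weight of inter-fact support (illustrative parameter)
    """
    sources = list(claims)
    trust = {s: 0.5 for s in sources}          # initial source reliability
    # collect the candidate facts claimed for each object
    facts_per_object = defaultdict(set)
    for s in sources:
        for obj, fact in claims[s].items():
            facts_per_object[obj].add(fact)

    confidence = {}
    for _ in range(rounds):
        # 1) fact confidence from the reliability of its supporting sources
        confidence = defaultdict(float)
        for s in sources:
            for obj, fact in claims[s].items():
                confidence[(obj, fact)] += trust[s]
        # 2) add mutual support from similar facts about the same object
        boosted = dict(confidence)
        for obj, facts in facts_per_object.items():
            for f in facts:
                boost = sum(similarity(f, g) * confidence[(obj, g)]
                            for g in facts if g != f)
                boosted[(obj, f)] = confidence[(obj, f)] + rho * boost
            # normalize per object so confidences stay comparable
            total = sum(boosted[(obj, f)] for f in facts) or 1.0
            for f in facts:
                boosted[(obj, f)] /= total
        confidence = boosted
        # 3) source reliability = mean confidence of the facts it claims
        for s in sources:
            claimed = [confidence[(obj, f)] for obj, f in claims[s].items()]
            trust[s] = sum(claimed) / len(claimed) if claimed else 0.5

    # the estimated truth for each object is its highest-confidence fact
    return {obj: max(facts, key=lambda f: confidence[(obj, f)])
            for obj, facts in facts_per_object.items()}
```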
Funding: Fundamental Research Funds for the Central Universities, China (No. 22D111207).
Abstract: With the rapid growth of the Internet, it has become easier for people to obtain information about the objects they are interested in. However, this information often conflicts. To resolve conflicts and obtain the true information, truth discovery has been proposed and has received widespread attention, and many algorithms have been developed for different scenarios. This paper surveys these algorithms and summarizes them from the perspective of algorithm models and their underlying concepts. Classic datasets and evaluation metrics are also presented, and future directions are suggested to help readers better understand the field of truth discovery.
Abstract: With the continuous expansion of data center networks, changing network requirements, and increasing pressure on network bandwidth, traditional network architectures can no longer meet demand. The development of software defined networking (SDN) has brought new opportunities and challenges to future networks: by separating the data plane from the control plane, SDN improves the performance of the entire network, and researchers have integrated the SDN architecture into data centers to improve resource utilization and performance. This paper first introduces the basic concepts of SDN and data center networks, then discusses SDN-based load balancing mechanisms for data centers from different perspectives, and finally summarizes research on SDN-based load balancing and its development trends.
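One common flavor of the mechanisms such surveys cover is controller-driven flow scheduling: with a global view of link utilization, the controller steers each flow onto the least-loaded of several equal-cost paths. The sketch below is illustrative only; the topology, link names, and min-max utilization rule are assumptions, not taken from any specific paper.

```python
def pick_path(paths, link_utilization):
    """Pick the path whose most loaded link is lightest (min-max utilization).

    paths: list of candidate paths, each a list of link identifiers
    link_utilization: dict mapping link id -> current utilization in [0, 1]
    """
    def bottleneck(path):
        return max(link_utilization[link] for link in path)
    return min(paths, key=bottleneck)

# Example: two equal-cost paths through a hypothetical leaf-spine fabric
paths = [["leaf1-spine1", "spine1-leaf2"], ["leaf1-spine2", "spine2-leaf2"]]
utilization = {"leaf1-spine1": 0.8, "spine1-leaf2": 0.2,
               "leaf1-spine2": 0.4, "spine2-leaf2": 0.3}
print(pick_path(paths, utilization))   # the second path (bottleneck 0.4 < 0.8)
```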
Funding: Supported by the National Key R&D Program of China (No. 2018YFB1003905), the National Natural Science Foundation of China (Grant No. 61971032), and the Fundamental Research Funds for the Central Universities (No. FRF-TP-18-008A3).
Abstract: With the widespread adoption of collaborative software development, processing the code data generated in programming scenarios has become a research hotspot. In collaborative programming, different users submit code in a distributed way. Consistency of code grammar can be enforced by syntax constraints; however, when different users work on the same code, their differing development choices inevitably lead to semantic data conflicts. This paper considers the characteristics of code-segment data in a programming scenario. A code sequence is obtained by decomposing a code segment with lexical analysis. Combined with traditional approaches to data conflict resolution, the code sequence is treated as the declared value object in the conflict resolution problem. Through similarity analysis of code-sequence objects, the concept of the deviation degree between the declared value object and the truth value object is proposed, and a multiple truth discovery algorithm based on deviation (MTDD) is presented. Baseline methods such as Conflict Resolution on Heterogeneous Data, Voting-K, and MTRuths_Greedy are compared to verify the performance and precision of the proposed MTDD algorithm.
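The deviation-based idea can be illustrated with a small sketch: code segments are tokenized into sequences, pairwise similarity is computed, and claimed segments that deviate too much from the others are discarded, allowing several truths to survive. This is not the MTDD algorithm itself; the tokenizer, the Jaccard similarity, and the `max_dev` threshold are simplifying assumptions.

```python
import re

def tokenize(code):
    """Crude lexical pass: split a code segment into identifier/number/operator tokens."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)

def deviation(claimed, candidate_truth):
    """Deviation degree of a claimed code sequence from a candidate truth,
    measured here as 1 - Jaccard similarity of their token sets."""
    a, b = set(tokenize(claimed)), set(tokenize(candidate_truth))
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def select_truths(claims, max_dev=0.4):
    """Keep every claimed segment whose average deviation from the other
    claims is at most max_dev, i.e. allow multiple truths per object."""
    kept = []
    for i, c in enumerate(claims):
        others = [o for j, o in enumerate(claims) if j != i]
        if not others:
            return list(claims)
        avg = sum(deviation(c, o) for o in others) / len(others)
        if avg <= max_dev:
            kept.append(c)
    return kept
```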
Abstract: [Objective] To quantitatively evaluate the traffic safety effect of left-turn signals for non-motorized vehicles at urban intersections. [Methods] A safety evaluation method for left-turn non-motorized vehicle signals based on the extended time to collision (ETTC) indicator is proposed. Because the existing time to collision (TTC) indicator is not suitable for evaluating left-turn non-motorized vehicle conflicts at intersections, the ETTC indicator, which accounts for the influence of vehicle size and acceleration on traffic conflicts, is adopted to evaluate non-motorized vehicle conflicts at intersections. Video big data were collected at four signalized intersections in Changsha, microscopic vehicle trajectories were extracted with the video analysis software Tracker, and a case study was carried out. [Results] The left-turn non-motorized vehicle signal clarifies the right of way of non-motorized vehicles in the time dimension, and its installation significantly reduces the non-motorized vehicle conflict rate, by 40.11% in off-peak hours and 25.27% in peak hours. At the end of the through phase, just before the left-turn phase turns green, left-turning non-motorized vehicles in the treatment group wait in the waiting area and the conflict rate drops to zero, whereas in the comparison group nearly 50% of non-motorized vehicles turn left illegally, causing severe conflicts. The improvement from installing the left-turn signal first increases and then decreases as non-motorized vehicle flow grows, and fluctuates gradually downward as motor vehicle flow grows. [Conclusion] This study reveals the effect of left-turn signals for non-motorized vehicles on reducing traffic conflicts at intersections and provides a useful reference for non-motorized traffic safety management at urban intersections.
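The paper's exact ETTC formulation is not given in the abstract; a common constant-acceleration variant is sketched below, with vehicle size handled by measuring the gap between vehicle boundaries rather than centers. Function names and parameters are illustrative assumptions.

```python
import math

def ttc(gap, closing_speed):
    """Classic time to collision: gap / closing speed (infinite if not closing)."""
    return gap / closing_speed if closing_speed > 0 else math.inf

def ettc(gap, closing_speed, closing_accel):
    """Constant-acceleration extension of TTC (one common ETTC form).

    gap            distance between the two vehicles' nearest boundaries (m),
                   i.e. center distance minus half of each vehicle's length,
                   so that vehicle size is taken into account
    closing_speed  rate at which the gap shrinks (m/s), positive when closing
    closing_accel  rate of change of the closing speed (m/s^2)
    """
    if abs(closing_accel) < 1e-9:
        return ttc(gap, closing_speed)
    # solve 0.5*a*t^2 + v*t - gap = 0 for the smallest positive t
    disc = closing_speed ** 2 + 2 * closing_accel * gap
    if disc < 0:
        return math.inf          # the gap never closes
    roots = [(-closing_speed + s * math.sqrt(disc)) / closing_accel for s in (1, -1)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf

# Example: 8 m boundary gap, closing at 2 m/s while still accelerating together
print(round(ettc(8.0, 2.0, 1.5), 2))   # 2.19 s, smaller than the plain TTC of 4.0 s
```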
Abstract: To quantitatively identify locations prone to motor-vehicle conflict risk within unsignalized urban roundabouts and reduce their accident rate, this paper builds a motor-vehicle conflict risk identification model for unsignalized roundabouts. First, high-precision, continuous multi-vehicle trajectory video was collected with an unmanned aerial vehicle, and the Kinovea video motion analysis software was used to identify and track moving vehicles and record their motion data frame by frame. Second, based on the conflict indicator TTC (Time to Collision), a vehicle TTC calculation method adapted to the road alignment of roundabouts is proposed, and the cumulative frequency method is used to set the thresholds for severe, general, and slight conflicts at 1.2 s, 2.8 s, and 4.4 s, respectively. Finally, by drawing spatial asynchronous maps of traffic conflicts in peak and off-peak periods and combining the number of conflicts with the severe conflict rate, the 36 sub-segments of the roundabout are rated by conflict risk level. The results show that in the peak period a sub-segment experiences about 15 conflicts on average with a severe conflict rate of 17.45%, while in the off-peak period a sub-segment experiences about 8 conflicts on average with a severe conflict rate of 8.28%. High-risk areas account for 50% of sub-segments in the peak period and 8.33% in the off-peak period, and they are concentrated in the weaving segments. Therefore, roundabouts are more prone to traffic accidents in peak periods and in weaving segments. These findings help traffic management agencies understand the conflict patterns of roundabouts across periods and segments so that corresponding warning and management measures can be taken.
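The threshold-based classification and severe conflict rate described above can be illustrated directly from the stated values (1.2 s, 2.8 s, 4.4 s). The function names and the use of half-open intervals are assumptions made for illustration.

```python
# Thresholds from the abstract: severe < 1.2 s, general < 2.8 s, slight < 4.4 s
SEVERE, GENERAL, SLIGHT = 1.2, 2.8, 4.4

def classify(ttc_seconds):
    """Map a TTC value (s) to a conflict severity class, or None if no conflict."""
    if ttc_seconds < SEVERE:
        return "severe"
    if ttc_seconds < GENERAL:
        return "general"
    if ttc_seconds < SLIGHT:
        return "slight"
    return None

def segment_stats(ttc_samples):
    """Number of conflicts and severe conflict rate for one sub-segment."""
    labels = [classify(t) for t in ttc_samples]
    conflicts = [c for c in labels if c is not None]
    severe_rate = (sum(c == "severe" for c in conflicts) / len(conflicts)
                   if conflicts else 0.0)
    return len(conflicts), severe_rate

# Example: TTC samples (s) observed on one hypothetical sub-segment
print(segment_stats([0.9, 1.5, 3.0, 5.2, 2.1, 1.1]))   # (5, 0.4)
```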