Funding: supported by the ABT SHIELD (Anti-Bot and Trolls Shield) project at the Systems Research Institute, Polish Academy of Sciences, in cooperation with EDGE NPD, RPMA.01.02.00-14-B448/18-00, funded by the Regional Development Fund for the development of Mazovia.
Abstract: Distinguishing between web traffic generated by bots and humans is an important task in the evaluation of online marketing campaigns. One of the main challenges is the only partial availability of performance metrics: although some users can be unambiguously classified as bots, the correct label is uncertain in many cases. This calls for classifiers capable of explaining their decisions. This paper demonstrates two such mechanisms based on features carefully engineered from web logs. The first is a hand-crafted rule-based system. The second is a hierarchical model that first performs clustering and then classification using human-centred, interpretable methods. The stability of the proposed methods is analyzed, and a minimal set of features that convey the class-discriminating information is selected. The proposed data processing and analysis methodology is successfully applied to real-world data sets from online publishers.
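The abstract does not disclose the concrete rules or models, but the two mechanisms can be sketched roughly as follows. This Python sketch assumes hypothetical web-log features (requests_per_minute, ua_is_headless, avg_dwell_seconds) and uses scikit-learn's KMeans plus a shallow decision tree as stand-ins for the paper's interpretable clustering and classification steps; none of these names or thresholds come from the paper itself.

```python
# A minimal sketch, not the paper's implementation: feature names,
# thresholds, and model choices are all illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def rule_based_label(session: dict) -> str:
    """Mechanism 1: hand-crafted rules; every verdict cites a readable test."""
    if session["ua_is_headless"]:
        return "bot"  # headless user agents are near-certain bots
    if session["requests_per_minute"] > 120:
        return "bot"  # faster than plausible human browsing
    if session["avg_dwell_seconds"] < 0.5:
        return "bot"  # no time to render or read a page
    return "human"

# Mechanism 2: cluster sessions first, then fit an interpretable
# classifier per cluster (shallow trees keep decision paths readable).
rng = np.random.default_rng(0)
X = rng.random((200, 3))            # toy matrix: sessions x features
y = (X[:, 0] > 0.7).astype(int)     # toy partial labels: 1 = bot, 0 = human

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
per_cluster = {
    c: DecisionTreeClassifier(max_depth=3).fit(X[clusters == c], y[clusters == c])
    for c in np.unique(clusters)
}

print(rule_based_label({"ua_is_headless": False,
                        "requests_per_minute": 200,
                        "avg_dwell_seconds": 4.0}))  # -> bot
```

Capping the per-cluster trees at a small depth is one way to honour the explainability requirement stated above: each decision path stays short enough to read off as a rule.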
Abstract: In the Internet era, application-layer Distributed Denial of Service (DDoS) attacks have become a major threat to public networks, leaving many servers unable to provide service and causing severe damage. To counter such attacks, a comprehensive defense strategy is proposed. The principles and methods of the attack behavior are analyzed and the differences in user behavior are studied; a traffic monitoring system is designed to monitor network traffic in real time and to promptly alert administrators to take countermeasures when abnormal traffic is detected. In addition, techniques such as maintaining a Web server blacklist and applying data filtering are used to effectively block unwanted traffic. By applying these strategies in combination, application-layer DDoS attacks can be effectively prevented and normal server operation ensured.
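As an illustration of the strategy outlined above (real-time traffic monitoring, administrator alerts, blacklisting, and data filtering), here is a hedged Python sketch. The window size, rate threshold, alert mechanism, and all identifiers are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch: a sliding-window per-IP request counter that alerts on
# abnormal rates, plus blacklist-based filtering. Thresholds are assumed.
import time
from collections import defaultdict, deque

BLACKLIST = {"203.0.113.7"}   # maintained list of known-bad clients
RATE_LIMIT = 100              # max requests per WINDOW seconds (assumed)
WINDOW = 10.0

_hits: dict[str, deque] = defaultdict(deque)

def allow_request(ip: str, now: float | None = None) -> bool:
    """Return False (drop the request) for blacklisted or abnormally fast clients."""
    if ip in BLACKLIST:
        return False                      # data filtering: block outright
    now = time.monotonic() if now is None else now
    q = _hits[ip]
    q.append(now)
    while q and now - q[0] > WINDOW:      # slide the window forward
        q.popleft()
    if len(q) > RATE_LIMIT:
        print(f"ALERT: abnormal traffic from {ip} "
              f"({len(q)} requests in {WINDOW}s)")  # notify the administrator
        BLACKLIST.add(ip)                 # escalate to the blacklist
        return False
    return True

# Example: a burst from one client trips the alert once, then gets filtered.
for i in range(150):
    allow_request("198.51.100.5", now=float(i) * 0.01)
```

In a real deployment the counter would sit in front of the Web server (e.g. in a reverse proxy) and the alert would go to a monitoring channel rather than stdout; the structure above only mirrors the monitor-alert-blacklist-filter flow described in the abstract.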