Abstract: Electronic Commerce (E-Commerce) was created to help businesses expand their market reach through the internet, unconstrained by space and time. However, alongside these benefits, E-Commerce also raises consumer concerns about responsibility for the personal data recorded and collected by E-Commerce companies. This personal data includes consumer names, passwords, debit and credit card numbers, email conversations, and information related to consumer requests. In Indonesia, cyber attacks have struck three major E-Commerce companies several times. In 2019, users’ personal data from Bukalapak and Tokopedia, including email addresses, telephone numbers, and residential addresses, was sold on the deep web. Even though the E-Commerce companies affected by these attacks already have a Computer Security Incident Response Team (CSIRT) staffed with both defensive and offensive security engineers, this arrangement still has a weakness: the CSIRT focuses on incident handling and defensive experimentation, not on how to store data and prepare for forensics. The CSIRT therefore repeats the same work each time; in this iterative procedure, when an attack recurs it is again met only with technical handling. Previous research has revealed that organizations with Knowledge Management (KM) have reduced the cost of their cyber security operations by up to a factor of four compared with operating without KM. The author proposes a knowledge management strategy for handling cyber incidents in Indonesian E-Commerce CSIRTs. This research produced four KM Processes and two KM Enablers, which were then translated into concrete actions. The KM Processes are Knowledge Creation, Knowledge Storing, Knowledge Sharing, and Knowledge Utilizing; the KM Enablers are Technology Infrastructure and People Competency.
Funding: Supported by the National Natural Science Foundation of China (60073043, 70071042).
Abstract: This paper presents a computer immunology model for computer security whose main components are defined using the idea of Multi-Agent systems. It introduces the principles of the natural immune system and discusses the idea and characteristics of Multi-Agent systems. It then gives a system model, describes the structure and function of each agent, and describes the communication method between agents.
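To make the agent arrangement above a little more concrete, here is a minimal Python sketch in which detector agents flag events they consider "non-self" and report them to a coordinating agent. The class names, whitelists, and message format are hypothetical illustrations; the abstract does not disclose the paper's actual agent structure or communication protocol.

```python
# Minimal multi-agent "immune" sketch (hypothetical names and message format).
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Alert:
    source: str
    event: str
    score: float  # how strongly the event looks "non-self"

class DetectorAgent:
    """Plays the role of an immune detector: flags events outside its known 'self' set."""
    def __init__(self, name: str, known_self: set):
        self.name = name
        self.known_self = known_self  # whitelist of normal events ("self")

    def inspect(self, event: str) -> Optional[Alert]:
        if event not in self.known_self:
            return Alert(source=self.name, event=event, score=1.0)
        return None

class CoordinatorAgent:
    """Collects alerts from detectors and decides on a response (the agent 'communication')."""
    def __init__(self):
        self.alerts: List[Alert] = []

    def receive(self, alert: Alert):
        self.alerts.append(alert)

    def respond(self) -> List[str]:
        return [f"block:{a.event}" for a in self.alerts if a.score >= 1.0]

# Usage: two detectors, each watching its own event stream, reporting to one coordinator.
streams = {
    "net": ["http_get", "port_scan"],
    "host": ["login", "priv_escalation"],
}
detectors = {"net": DetectorAgent("net", {"ping", "http_get"}),
             "host": DetectorAgent("host", {"login"})}
coordinator = CoordinatorAgent()
for name, events in streams.items():
    for event in events:
        alert = detectors[name].inspect(event)
        if alert:
            coordinator.receive(alert)
print(coordinator.respond())  # ['block:port_scan', 'block:priv_escalation']
```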
Abstract: Cyberattacks are difficult to prevent because the targeted companies and organizations often rely on new and fundamentally insecure cloud-based technologies, such as the Internet of Things. With increasing industry adoption and migration of traditional computing services to the cloud, one of the main challenges in cybersecurity is to provide mechanisms to secure these technologies. This work proposes a Data Security Framework for cloud computing services (CCS) that evaluates and improves CCS data security from a software engineering perspective, applying engineering methods and techniques to assess the levels of security within the cloud computing paradigm. The framework is developed by means of a methodology based on a heuristic theory that incorporates knowledge generated by existing works as well as the experience of their implementation. The paper presents the design details of the framework, which consists of three stages: identification of data security requirements, management of data security risks, and evaluation of data security performance in CCS.
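A hedged skeleton of the three stages named in this abstract is sketched below. The stage names come from the abstract, while the specific checks, control list, and coverage score are placeholder assumptions rather than the framework's actual content.

```python
# Skeleton of the three framework stages; all checks and scoring are placeholders.
from typing import Dict, List

def identify_requirements(service: Dict) -> List[str]:
    """Stage 1: derive data-security requirements for a cloud computing service (CCS)."""
    reqs = []
    if service.get("stores_pii"):
        reqs.append("encrypt data at rest")
    if service.get("public_api"):
        reqs.append("authenticate and rate-limit API access")
    return reqs

def manage_risks(requirements: List[str], controls: List[str]) -> List[str]:
    """Stage 2: treat each requirement not covered by an existing control as an open risk."""
    return [r for r in requirements if r not in controls]

def evaluate_performance(requirements: List[str], open_risks: List[str]) -> float:
    """Stage 3: a simple coverage score -- the fraction of requirements already satisfied."""
    if not requirements:
        return 1.0
    return 1.0 - len(open_risks) / len(requirements)

service = {"stores_pii": True, "public_api": True}
reqs = identify_requirements(service)
risks = manage_risks(reqs, controls=["encrypt data at rest"])
print(evaluate_performance(reqs, risks))  # 0.5
```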
Funding: Supported by the National Natural Science Foundation of China (60573141, 60773041); the National High Technology Research and Development Program of China (863 Program) (2006AA01Z439, 2007AA01Z404, 2007AA01Z478); the Natural Science Foundation of Jiangsu Province (BK2008451); the Science & Technology Project of Jiangsu Province (BE2009158); the Natural Science Foundation of Higher Education Institutions of Jiangsu Province (09KJB520010, 09KJB520009); the Research Fund for the Doctoral Program of Higher Education (20093223120001); the Specialized Research Fund of the Ministry of Education (2009117); the High Technology Research Program of Nanjing (2007RZ127); the Foundation of the National Laboratory for Modern Communications (9140C1105040805); the Postdoctoral Foundation of Jiangsu Province (0801019C); and the Science & Technology Innovation Fund for Higher Education Institutions of Jiangsu Province (CX08B-085Z, CX08B-086Z).
Abstract: To fight against malicious code in P2P networks, it is necessary to study the malicious code propagation model of P2P networks in depth. The epidemics of malicious code threatening P2P systems can be divided into active and passive propagation models, and a new passive propagation model of malicious code is proposed, which differentiates peers into four kinds of state and fits actual P2P networks better. From the propagation model it is easy to find that quickly getting peers patched and their anti-virus systems upgraded is the key to immunization and damage control. To distribute patches and immune modules efficiently, a new exponential tree plus (ET+) and a vaccine distribution algorithm based on ET+ are also proposed. The performance analysis and test results show that the vaccine distribution algorithm based on ET+ is robust, efficient, and much more suitable for P2P networks.
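The intuition behind tree-based vaccine distribution can be shown with a short sketch: if every patched peer forwards the patch to a fixed number of unpatched peers per round, coverage grows exponentially and full immunization takes roughly logarithmically many rounds. This is only the general idea, not the paper's ET+ structure or its actual distribution algorithm.

```python
# Hedged illustration of tree-style vaccine distribution: each patched peer
# forwards the patch to `fanout` unpatched peers per round, so coverage grows
# exponentially. This is NOT the ET+ structure itself, only the intuition behind it.
def rounds_to_immunize(num_peers: int, fanout: int = 2) -> int:
    patched = 1          # the initial distributor
    rounds = 0
    while patched < num_peers:
        patched = min(num_peers, patched + patched * fanout)  # every patched peer forwards
        rounds += 1
    return rounds

for n in (1_000, 100_000, 10_000_000):
    print(n, rounds_to_immunize(n))   # grows roughly like log(n): 7, 11, 15 rounds
```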
Funding: Supported by the National Natural Science Foundation of China (90104005, 66973034, 60473023).
Abstract: A new method using support vector data description (SVDD) to distinguish legitimate users from masqueraders based on UNIX user command sequences is proposed. Sliding windows are used to achieve low detection delay. Experiments demonstrate that the detection effect using enriched sequences is better than that of using truncated sequences. As an SVDD profile is composed of a small number of support vectors, the SVDD-based method achieves computation and storage advantages while its detection performance is similar to that of existing methods.
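A rough sketch of this detection setup is shown below, using scikit-learn's OneClassSVM as a stand-in for SVDD (the two are closely related one-class boundary methods) over sliding windows of UNIX commands encoded as token counts. The window size, nu value, and toy command data are illustrative assumptions, not the paper's settings.

```python
# One-class masquerade detection over sliding windows of UNIX commands.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import OneClassSVM

def windows(commands, size=10, step=1):
    """Slide a fixed-size window over a command sequence, joining each window into a string."""
    return [" ".join(commands[i:i + size]) for i in range(0, len(commands) - size + 1, step)]

# Training data: command history of the legitimate user only.
user_history = ["ls", "cd", "vi", "make", "gcc", "ls", "cd", "vi", "make", "gcc"] * 20
vec = CountVectorizer(token_pattern=r"\S+")
X_train = vec.fit_transform(windows(user_history))

# OneClassSVM used here as a stand-in for SVDD; nu bounds the training-outlier fraction.
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

# Test: a window of unfamiliar commands should be flagged (-1 = outlier, +1 = legitimate).
suspect = ["nc", "wget", "chmod", "nc", "wget", "chmod", "nc", "wget", "chmod", "nc"]
print(model.predict(vec.transform(windows(suspect))))
```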
Abstract: Currently, most anomaly-detection pattern-learning algorithms require a set of purely normal data on which to train their model. If some intrusions are buried within the training data, the algorithm may not detect these attacks because it will assume that they are normal. In reality, it is very hard to guarantee that there are no attack items in the collected training data. Focusing on this problem, this paper first proposes a new anomaly detection measure based on the probability characteristics of intrusion instances and normal instances. Second, on the basis of this measure, we present a clustering-based unsupervised algorithm for learning anomaly detection patterns, which overcomes the shortcoming above. Finally, experiments are conducted to verify that the proposed algorithm is valid.
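The general flavor of clustering-based unsupervised anomaly detection can be sketched as follows: cluster the unlabeled data and flag points falling in very small clusters as likely intrusions, on the assumption that attacks are rare. This is a generic heuristic for illustration only, not the paper's probability-based measure.

```python
# Generic clustering-based unsupervised detector on synthetic data:
# points in very small clusters are treated as anomalies.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(980, 2))     # bulk of the traffic
intrusions = rng.normal(loc=6.0, scale=0.5, size=(20, 2))  # rare attack instances
X = np.vstack([normal, intrusions])

k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
sizes = np.bincount(labels, minlength=k)

# Flag clusters holding less than 5% of the data as anomalous.
anomalous_clusters = np.where(sizes < 0.05 * len(X))[0]
flagged = np.isin(labels, anomalous_clusters)
print(f"flagged {flagged.sum()} points as anomalous")
```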
Abstract: Tackling binary program analysis problems has traditionally implied manually defining rules and heuristics, a tedious and time-consuming task for human analysts. In order to improve automation and scalability, we propose an alternative direction based on distributed representations of binary programs with applicability to a number of downstream tasks. We introduce Bin2vec, a new approach leveraging Graph Convolutional Networks (GCN) along with computational program graphs in order to learn a high-dimensional representation of binary executable programs. We demonstrate the versatility of this approach by using our representations to solve two semantically different binary analysis tasks: functional algorithm classification and vulnerability discovery. We compare the proposed approach to our own strong baseline as well as published results, and demonstrate improvement over state-of-the-art methods for both tasks. We evaluated Bin2vec on 49,191 binaries for the functional algorithm classification task, and on 30 different CWE-IDs, each with at least 100 CVE entries, for the vulnerability discovery task. We set a new state-of-the-art result by reducing the classification error by 40% compared to the source-code-based inst2vec approach, while working on binary code. For almost every vulnerability class in our dataset, our prediction accuracy is over 80% (and over 90% in multiple classes).
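For readers unfamiliar with graph convolutions, the sketch below applies the standard GCN propagation rule to a toy program graph with a mean-pooling readout. It illustrates the kind of layer Bin2vec builds on; the actual Bin2vec node features, network depth, and readout are not reproduced here.

```python
# Minimal GCN forward pass (Kipf & Welling propagation rule) on a toy program graph.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))        # inverse sqrt of node degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)               # ReLU

# Toy graph: 4 basic blocks connected by control flow, 3 features per block.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))         # per-block input features
W1 = np.random.default_rng(1).normal(size=(3, 8))
W2 = np.random.default_rng(2).normal(size=(8, 8))

H2 = gcn_layer(A, gcn_layer(A, H, W1), W2)
graph_embedding = H2.mean(axis=0)   # simple mean-pooling readout over the whole binary
print(graph_embedding.shape)        # (8,)
```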
Abstract: In this paper, a new scheme that uses digraph substitution rules to conceal the mechanism or activity required to derive password-images is proposed. In the proposed method, a user is only required to click on one of the pass-images, instead of both pass-images, shown in each challenge set for three consecutive sets. While this activity is simple enough to reduce login time, the images clicked appear to be random and can only be determined with complete knowledge of the registered password along with the activity rules. Thus, it becomes impossible for shoulder-surfing attackers to obtain information about which password-images and pass-images are used by the user. Although attackers may know the digraph substitution rules used in the proposed method, the scenario information used in each challenge set remains hidden from them. User study results reveal an average login process of less than half a minute. In addition, the proposed method is resistant to shoulder-surfing attacks.
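A toy example of deriving a single clickable image from two registered password-images is sketched below. The position-based rule used here (take the row of the first password-image and the column of the second) is a hypothetical stand-in for the paper's actual digraph substitution rules, but it shows why an observer sees a click on an image that is neither of the registered ones.

```python
# Toy illustration: derive the image to click from the positions of two registered
# password-images in the displayed grid. The rule below is hypothetical, not the paper's.
def derive_pass_image(grid, password_images):
    """grid: 2-D list of image IDs shown in one challenge set.
    password_images: the two registered image IDs (the 'digraph')."""
    positions = {img: (r, c) for r, row in enumerate(grid) for c, img in enumerate(row)}
    (r1, _), (_, c2) = positions[password_images[0]], positions[password_images[1]]
    return grid[r1][c2]   # the single image the user actually clicks

challenge = [["cat", "tree", "car"],
             ["sun", "boat", "fish"],
             ["book", "cup", "star"]]
# Row of 'sun' combined with column of 'star' points at 'fish', which the observer
# sees clicked even though it is not one of the registered password-images.
print(derive_pass_image(challenge, ("sun", "star")))  # -> "fish"
```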
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61806142; the Natural Science Foundation of Tianjin under Grant No. 18JCYBJC44000; the Tianjin Science and Technology Program under Grant No. 19PTZWHZ00020; and the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (Training Key Talents in Industrial Convergence Security) under Grant No. 2019-0-01343.
Abstract: File entropy is one of the major indicators of crypto-ransomware because encryption by ransomware increases the randomness of file contents. However, entropy-based ransomware detection has certain limitations; for example, when distinguishing ransomware-encrypted files from normal files with inherently high entropy, misclassification is very possible. In addition, the cost of evaluating entropy over an entire file renders entropy-based detection impractical for large files. In this paper, we propose two indicators based on byte frequency for use in ransomware detection, termed EntropySA and DistSA, both of which consider the interesting characteristics of certain file subareas termed “sample areas” (SAs). For an encrypted file, both the sampled area and the whole file exhibit high-level randomness, but for a plain file, the sampled area embeds informative structures such as a file header and thus exhibits relatively low-level randomness even when the entire file exhibits high-level randomness. EntropySA and DistSA use byte frequency and a variation of byte frequency, respectively, derived from sampled areas. Both indicators incur less overhead than other entropy-based detection methods, as experimentally proven using realistic ransomware samples. To evaluate the effectiveness and feasibility of our indicators, we also employ three expensive but elaborate classification models (neural network, support vector machine, and threshold-based approaches). Using these models, our indicators yielded an average F1-measure of 0.994 and an average detection rate of 99.46% for file encryption attacks by realistic ransomware samples.
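The sample-area idea can be illustrated with a short sketch that measures byte entropy only over a small leading region of a file, where a plain file keeps structured data such as its header. The 1024-byte region and 7.3-bit threshold below are illustrative assumptions, not the paper's EntropySA or DistSA definitions.

```python
# Byte entropy over a small leading "sample area" rather than over the whole file.
# Region size and threshold are illustrative choices for this sketch.
import math
import os
from collections import Counter

def sample_area_entropy(data: bytes, offset: int = 0, size: int = 1024) -> float:
    """Shannon entropy (bits per byte) of a fixed-size subarea of the file."""
    area = data[offset:offset + size]
    if not area:
        return 0.0
    counts = Counter(area)
    total = len(area)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.3) -> bool:
    return sample_area_entropy(data) >= threshold

plain = b"%PDF-1.7\n" + b"Hello world. " * 200   # structured, repetitive content
encrypted_like = os.urandom(4096)                 # ransomware output is near-random
print(looks_encrypted(plain), looks_encrypted(encrypted_like))   # False True
```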