Background: This paper presents a case study on 100Credit, an Internet credit service provider in China. 100Credit began as an IT company specializing in e-commerce recommendation before getting into the credit rating business. The company makes use of Big Data on multiple aspects of individuals' online activities to infer their potential credit risk. Methods: Based on 100Credit's business practices, this paper summarizes four aspects related to the value of Big Data in Internet credit services. Results: 1) value from large data volume that provides access to more borrowers; 2) value from prediction correctness in reducing lenders' operational cost; 3) value from the variety of services catering to different needs of lenders; and 4) value from information protection to sustain credit service businesses. Conclusion: The paper also discusses the opportunities and challenges of Big Data-based credit risk analysis, which need to be addressed in future research and practice.
Personal credit risk assessment is an important part of the development of financial enterprises. Big Data credit investigation is an inevitable trend in personal credit risk assessment, but some data are missing and sample sizes are small, which makes models difficult to train. At the same time, different financial platforms require different models trained on the characteristics of their own samples, which is time-consuming. In view of these two problems, this paper uses the idea of transfer learning to build a transferable personal credit risk model based on Instance-based Transfer Learning (Instance-based TL). The model reweights the samples in the source domain, migrates the existing large-dataset samples to the small-sample target domain, and identifies what the two domains have in common. We also conducted extensive experiments on the selection of base learners, covering both traditional machine learning algorithms and ensemble learning algorithms, such as decision trees, logistic regression, and XGBoost. The datasets come from a P2P lending platform and a bank; the results show that the AUC of Instance-based TL is 24% higher than that of the traditional machine learning model, which demonstrates the practical value of the proposed model. The model is evaluated with AUC, precision, recall, and F1, confirming its applicability from multiple angles. At present, we are trying to apply this model to more fields to improve its robustness and applicability; on the other hand, we are doing more in-depth research on domain adaptation to enrich the model.
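The abstract describes instance-based transfer: source-domain samples are reweighted so that a large source dataset can help train a model for a small target domain. The paper's exact implementation is not given, so the sketch below uses a simplified TrAdaBoost-style scheme with a weighted decision stump as a stand-in base learner; the data, function names, and round count are all illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_stump(X, y, p):
    """Fit a one-split decision stump minimizing weighted error p."""
    best = (0, 0.0, 1, 1.0)  # (feature, threshold, polarity, error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = (pol * (X[:, j] - thr) > 0).astype(int)
                err = p[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best[:3]

def predict_stump(stump, X):
    j, thr, pol = stump
    return (pol * (X[:, j] - thr) > 0).astype(int)

def tradaboost(X_src, y_src, X_tgt, y_tgt, n_rounds=10):
    """Instance-based transfer: misclassified source samples are
    down-weighted each round, misclassified target samples up-weighted,
    so later learners focus on the small target domain."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    n_s = len(y_src)
    w = np.ones(len(y))
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_s) / n_rounds))
    learners, betas = [], []
    for _ in range(n_rounds):
        p = w / w.sum()
        stump = fit_stump(X, y, p)
        pred = predict_stump(stump, X)
        # error measured on the target portion only
        err_t = np.sum(p[n_s:] * (pred[n_s:] != y[n_s:])) / p[n_s:].sum()
        err_t = min(max(err_t, 1e-10), 0.49)
        beta_t = err_t / (1.0 - err_t)
        w[:n_s] *= beta_src ** (pred[:n_s] != y[:n_s])          # shrink source mistakes
        w[n_s:] *= beta_t ** (-(pred[n_s:] != y[n_s:]).astype(float))  # grow target mistakes
        learners.append(stump)
        betas.append(beta_t)
    return learners, betas

def predict(learners, betas, X):
    """Vote with the second half of the learners, as in TrAdaBoost."""
    half = len(learners) // 2
    score = np.zeros(len(X))
    total = 0.0
    for stump, b in zip(learners[half:], betas[half:]):
        alpha = np.log(1.0 / max(b, 1e-10))
        score += alpha * predict_stump(stump, X)
        total += alpha
    return (score >= total / 2).astype(int)
```

In practice the stump would be replaced by one of the base learners the abstract lists (decision tree, logistic regression, XGBoost), and the resulting scores evaluated with AUC, precision, recall, and F1.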
The advancement of mobile devices, public networks, and the Internet of Things creates huge amounts of complex data; both structured and unstructured data are being captured in the hope of allowing organizations to make better business decisions, as data is now pivotal to an organization's success. These enormous amounts of data are referred to as Big Data, which enables a competitive advantage over rivals when processed and analyzed appropriately. However, Big Data analytics raises several concerns, including data management, privacy and security, finding an optimal path for transporting data, and data representation. Moreover, the structure of the network does not completely match transportation demand, i.e., there still exist bottlenecks in the network. This paper presents a new approach, based on the knapsack problem, for finding the optimal path for valuable data movement through a given network. Each piece of data is assigned a value reflecting its importance (each piece is defined by two attributes, size and value), and the approach tries to find the optimal path from source to destination; mathematical models are developed to adjust data flows along their shortest paths based on the 0-1 knapsack problem. We also carried out computational experiments using the commercial solver Gurobi and a greedy algorithm (GA), respectively. The results indicate that the proposed models are effective and workable. The paper introduces two different algorithms for the shortest path problem: the first handles the case where activities are stochastic and do not depend on weights; the second handles the case where the problem does depend on weights.
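The abstract's core mechanism is the 0-1 knapsack problem: each data piece has a size and a value, and a subset must be chosen to maximize total value without exceeding a capacity limit. The paper's full mathematical models are not reproduced here, so the following is a standard dynamic-programming sketch with hypothetical item data, not the authors' Gurobi formulation.

```python
def knapsack(items, capacity):
    """0-1 knapsack: items is a list of (size, value) pairs.
    Returns (best total value, indices of the chosen pieces)."""
    # dp[c] = best value achievable with remaining capacity c
    dp = [0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in items]
    for i, (size, value) in enumerate(items):
        # iterate capacity downward so each item is used at most once
        for c in range(capacity, size - 1, -1):
            if dp[c - size] + value > dp[c]:
                dp[c] = dp[c - size] + value
                keep[i][c] = True
    # backtrack to recover which data pieces were selected
    chosen, c = [], capacity
    for i in range(len(items) - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= items[i][0]
    return dp[capacity], chosen[::-1]
```

A greedy baseline of the kind the abstract compares against would instead sort pieces by value-to-size ratio and take them until the capacity is exhausted; the DP above is exact, while the greedy heuristic can miss the optimum (e.g. it may take a high-ratio small piece that blocks a better pair).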