scorecardpy

Scorecard Development in Python (评分卡, "scorecard")

Results: 47 scorecardpy issues, sorted by most recently updated

Hello, Mr. Xie! While using the scorecard package, I noticed that the IV values calculated by var_filter differ from those calculated by woebin: the former are computed directly on the raw variables without binning, while the latter are computed from the bins. So here is my question: why is variable selection done at the very beginning rather than after binning?
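For reference, here is a minimal sketch of how the two IV values can be inspected side by side. It assumes the documented `var_filter`/`woebin` API and the bundled `germancredit` dataset; `var_filter`'s own IV is applied internally through its `iv_limit` threshold, while the post-binning IV is read from the `total_iv` column of each `woebin` result.

```python
import scorecardpy as sc

dat = sc.germancredit()

# screening up front: var_filter drops variables whose (unbinned) IV
# falls below iv_limit, along with high-missing / near-constant columns
dt_s = sc.var_filter(dat, y="creditability", iv_limit=0.02)

# IV after optimal binning: every bin table carries a total_iv column
bins = sc.woebin(dt_s, y="creditability")
iv_after_binning = {var: float(df["total_iv"].iloc[0]) for var, df in bins.items()}
print(sorted(iv_after_binning.items(), key=lambda kv: -kv[1]))
```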

I get this error when running sc.woebin. What does it mean? Could anyone help explain?

Hi, ![image](https://user-images.githubusercontent.com/17172507/104402675-d4110180-5524-11eb-99fc-7afab99052b2.png) A few days ago, after updating the pandas library to version 1.2.0, the `woebin` function of scorecardpy version 0.1.9.2 stopped working. When trying to execute it, the error is...
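A stopgap while the incompatibility is investigated is to pin pandas below 1.2 in the environment. The snippet below only prints the installed versions so the mismatch is easy to spot before calling `woebin`; the exact version bound is an assumption based on this report.

```python
from importlib.metadata import version

# the report suggests scorecardpy 0.1.9.2's woebin broke after pandas
# was upgraded to 1.2.0, so check the installed combination up front
print("pandas      :", version("pandas"))
print("scorecardpy :", version("scorecardpy"))
```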

Currently I am using scorecardpy version 0.1.9.2 and I am trying to generate plots using `woebin_plot()`, and it does produce them. But when there is a lengthy string in "bin"...
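If the long bin labels are only being clipped, one workaround is to enlarge and re-lay-out each returned figure before saving. This is a sketch that assumes `woebin_plot` returns a dict of matplotlib figures keyed by variable name, as in the package examples; the figure size and label rotation are arbitrary.

```python
import scorecardpy as sc

dat = sc.germancredit()
bins = sc.woebin(dat, y="creditability")
plots = sc.woebin_plot(bins)

# widen each figure and rotate the x tick labels so long bin strings fit
for var, fig in plots.items():
    fig.set_size_inches(12, 5)
    for ax in fig.axes:
        ax.tick_params(axis="x", labelrotation=30)
    fig.tight_layout()
    fig.savefig(f"woebin_{var}.png", dpi=150)
```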

Running `import scorecardpy as sc; dat = sc.germancredit(); bins = sc.woebin(dt_s, y="creditability")` raises the error TypeError: unhashable type: 'numpy.ndarray'. Is there a bug in the source code? I am using the latest version, installed today.
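For comparison, here is the germancredit flow as documented in the package README; note that `dt_s` in the snippet above normally comes from `var_filter`, so if that step was skipped the name would simply be undefined rather than raise the reported TypeError.

```python
import scorecardpy as sc

# documented end-to-end flow on the bundled German credit data
dat = sc.germancredit()
dt_s = sc.var_filter(dat, y="creditability")   # variable screening
bins = sc.woebin(dt_s, y="creditability")      # weight-of-evidence binning
print(list(bins)[:3])                          # first few binned variables
```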

Normally, it is essential for the bins (other than the missing bin) to be monotonically increasing or decreasing in bad rate, for better explainability. For example, the bigger a certain feature X1, the better the chance...
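Until such a constraint is built in, one workaround (a sketch assuming `woebin`'s documented `breaks_list` argument; the breakpoints for `age.in.years` are purely illustrative) is to flag variables whose bad rate is not monotone and re-bin them with manual breaks, or adjust them interactively with `woebin_adj`.

```python
import scorecardpy as sc

dat = sc.germancredit()
bins = sc.woebin(dat, y="creditability")

def is_monotone(badprob):
    """True if the bad rate never changes direction across bins."""
    diffs = badprob.diff().dropna()
    return (diffs >= 0).all() or (diffs <= 0).all()

non_monotone = [v for v, df in bins.items() if not is_monotone(df["badprob"])]
print("variables with non-monotone bad rate:", non_monotone)

# re-bin an offending variable with coarser manual breaks until the trend is monotone
bins_fixed = sc.woebin(dat, y="creditability",
                       breaks_list={"age.in.years": [26, 35, 45]})
```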

When I use `sc.perf_eva(y_train, train_pred, plot_type=["ks"])` to compute KS, the definition of the KS curve does not seem right. The correct KS curve should plot TPR and FPR on the y-axis against the threshold on the x-axis, so the calculation in the source code looks wrong. Here is my code, adapted from examples found online and modified by myself, which I hope Mr. Xie can take as a reference:

```python
import numpy as np

def model_score_pro_ks(df_fact_tag, df_expected_score_or_pro, buckets, type_input='score'):
    # list of initial equal-width breakpoints
    breakpoints = np.arange(0, buckets + 1) / (buckets)
    # bin the expected scores into `buckets` equal-width bins; returns the
    # lower/upper bound of each segment as an array
    # `input` refers to the list made up of the initial bin breakpoints
    # `min` and `max` refer to the min/max of df_expected_score_or_pro
    def ...
```
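For reference, the statistic behind the KS curve described above can be computed directly from ROC quantities, KS = max over thresholds of |TPR - FPR|; this sketch uses scikit-learn rather than scorecardpy's own implementation.

```python
import numpy as np
from sklearn.metrics import roc_curve

def ks_statistic(y_true, y_score):
    """KS = max |TPR - FPR| over all score thresholds."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    gaps = np.abs(tpr - fpr)
    i = int(np.argmax(gaps))
    return gaps[i], thresholds[i]

# usage: y_train holds the 0/1 labels, train_pred the predicted probabilities
# ks, threshold_at_ks = ks_statistic(y_train, train_pred)
```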

Hi! Sometimes I need to keep different versions of scorecardpy, with some modifications, in the same place, e.g. `scorecardpy_019/`, `scorecardpy_0193ab/` etc. To make this possible the sub-modules must be imported...
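The change being asked for is presumably switching the package's internal imports from absolute to relative form, so that a renamed copy still resolves its own sub-modules. A sketch of the pattern; the module and function names are only illustrative.

```python
# inside a renamed copy, e.g. scorecardpy_0193ab/some_module.py

# absolute import: always resolves to the installed `scorecardpy`,
# so a renamed copy silently picks up the wrong code
# from scorecardpy.woebin import woebin

# relative import: resolves within whichever copy was imported
from .woebin import woebin
```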

Thanks for putting this package together - it's great to have a scorecard package in Python. However, it appears the weight of evidence binning algorithm only works with complete data,...
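If missing values are what trips up the binning, a stopgap (a sketch only; it assumes `woebin`'s documented `special_values` argument accepts a per-variable dict, and the sentinel value is arbitrary) is to replace NaNs with a sentinel and declare it a special value so it gets its own bin.

```python
import numpy as np
import scorecardpy as sc

dat = sc.germancredit()
# introduce some missing values purely for illustration
dat.loc[dat.sample(frac=0.05, random_state=1).index, "age.in.years"] = np.nan

# how much is missing per column?
print(dat.isna().mean().sort_values(ascending=False).head())

# replace NaNs with a sentinel and bin that sentinel separately
dat_filled = dat.fillna({"age.in.years": -9999})
bins = sc.woebin(dat_filled, y="creditability",
                 special_values={"age.in.years": [-9999]})
```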