
Python model.

Open stidk opened this issue 2 years ago • 5 comments

Could I ask whether there is Python model code for this xgboost? The model I trained myself doesn't seem to work.

stidk avatar Mar 30 '22 09:03 stidk

> Could I ask whether there is Python model code for this xgboost? The model I trained myself doesn't seem to work.

A model you trained yourself should work as long as you adapt it to the expected input/output format.
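For illustration, the "adapt the input/output" step usually amounts to mapping string class labels to the integer codes the model was trained on. A minimal sketch with made-up labels (the mapping dict and column names here are hypothetical):

```python
import pandas as pd

# Toy frame standing in for the label column of a dataset like iris.
df = pd.DataFrame({'species': ['Iris-setosa', 'Iris-versicolor',
                               'Iris-virginica', 'Iris-setosa']})

# Build a name -> integer code mapping in order of first appearance,
# then apply it to get the numeric labels xgboost expects.
codes = {name: i for i, name in enumerate(df['species'].unique())}
df['label'] = df['species'].map(codes)
print(df['label'].tolist())  # [0, 1, 2, 0]
```

The same mapping has to be reused at prediction time so that code `0` means the same class in training and inference.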

AmazureBUPT avatar Mar 31 '22 07:03 AmazureBUPT

Python:

import xgboost as xgb
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
 
if __name__ == '__main__':
    iris_feature_E = "sepal length", "sepal width", "petal length", "petal width"
    iris_feature = "the length of sepal", "the width of sepal", "the length of petal", "the width of petal"
    iris_class = "Iris-setosa", "Iris-versicolor", "Iris-virginica"
    
    data = pd.read_csv("/media/dk/2eee4ea8-6028-41ef-89c5-8f36a982bc1d/iris.csv", header=None)
    iris_types = data[4].unique()
    print(iris_types)
    for i, name in enumerate(iris_types):
        data.loc[data[4] == name, 4] = i  # .loc with a boolean mask; .at only takes scalar labels
    x, y = np.split(data.values, (4,), axis=1)
    x = x.astype(np.float64)  # data.values is an object array after the mixed-type edit
    y = y.astype(np.float64)
#     print('y:', y)
 
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.7, random_state=1)
 
    data_train = xgb.DMatrix(x_train, label=y_train)
    data_test = xgb.DMatrix(x_test, label=y_test)
    watchlist = [(data_test, 'eval'), (data_train, 'train')]
    param = {'max_depth': 3, 'eta': 1, 'objective': 'multi:softmax', 'num_class': 3}  # 'silent' is deprecated in recent xgboost; use 'verbosity' if needed
 
    bst = xgb.train(param, data_train, num_boost_round=10, evals=watchlist)
    bst.save_model('xgb.model')
    y_hat = bst.predict(data_test)
    result = y_test.reshape(1, -1) == y_hat
    print('the accuracy:\t', float(np.sum(result)) / len(y_hat))

Do I need to set all those parameters with the XGBoosterSetParam method, or what? Or can this only be done with from xgboost import XGBClassifier?

stidk avatar Mar 31 '22 07:03 stidk

@AmazureBUPT

stidk avatar Mar 31 '22 08:03 stidk

@stidk Has this problem been solved?

JohnComeon avatar Apr 04 '23 06:04 JohnComeon

Well, it definitely got solved, but it's been so long that I've forgotten the code @JohnComeon

stidk avatar Apr 04 '23 06:04 stidk