In the section below, we will show an example of how to implement the above steps for the simple Random Forest model that we created above. In this example, we will just tune with respect to one hyperparameter, `n_estimators`.

First read in Hyperopt:

```python
# read in hyperopt
from hyperopt import fmin, hp, tpe, Trials, space_eval, STATUS_OK
```

Next we define the function we want to minimise. This will be a function of `n_estimators` only, and it will return the negative of the accuracy computed by the `accuracy_score` function:

```python
# define the function we want to minimise
def objective(n_estimators):
    model = RandomForestClassifier(n_estimators=n_estimators,
                                   max_features='sqrt',
                                   random_state=42)
    model.fit(x_train, y_train)
    y_pred = model.predict(x_test)
    accuracy = accuracy_score(y_test, y_pred)
    return -accuracy

# implement Hyperopt
best_params = fmin(
    fn=objective,
    space=search_space,
    algo=algorithm,
    max_evals=200)
```

The reason we take the negative value of the accuracy is that Hyperopt's aim is to minimise the objective, so our accuracy needs to be negative; we can simply make it positive again at the end. The `max_evals` argument controls how many evaluations Hyperopt runs; this is recommended to be between 10–30 times the number of hyperparameters defined in the search space, to balance optimisation quality against computation time.