From review by @HarveySouth
Could: extend the hyperparameter tuning tutorial with a test set evaluation of the best model. It would be interesting to discuss. My own idea of reporting a good estimate of model performance is to find the hyperparameters that maximize the desired metric on the validation set, then apply exactly the same training process to a model trained on the training and validation data combined, and evaluate it against the test set. Since step/epoch counts are themselves hyperparameters, the final model's inference metric is then reportable. It is of course also possible to report just the whole training process (which should be done anyway) and leave interpretation to the user/reader, but I'm not sure what best practice would look like here.
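A minimal sketch of the workflow described above, assuming scikit-learn with an illustrative dataset, estimator, and hyperparameter grid (none of these are from the tutorial itself): tune on the validation split, treat the epoch budget (`max_iter`) as a hyperparameter, refit on train + validation combined, and report the metric on the held-out test set.

```python
# Illustrative sketch only; the estimator, dataset, and grid are assumptions,
# not part of the original tutorial.
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Split once into train / validation / test.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0
)

# The grid includes the number of epochs (max_iter), treated as a hyperparameter
# so that no validation set is needed when retraining the final model.
grid = {"alpha": [1e-4, 1e-3, 1e-2], "max_iter": [200, 500, 1000]}

best_score, best_params = -1.0, None
for alpha, max_iter in product(grid["alpha"], grid["max_iter"]):
    model = SGDClassifier(alpha=alpha, max_iter=max_iter, random_state=0)
    model.fit(X_train, y_train)
    score = accuracy_score(y_val, model.predict(X_val))
    if score > best_score:
        best_score, best_params = score, {"alpha": alpha, "max_iter": max_iter}

# Apply exactly the same training process with the best hyperparameters,
# but fit on train + validation combined, then report the test metric.
final_model = SGDClassifier(**best_params, random_state=0)
final_model.fit(X_trainval, y_trainval)
test_score = accuracy_score(y_test, final_model.predict(X_test))
print(f"best params: {best_params}, test accuracy: {test_score:.3f}")
```

The point of carrying `max_iter` over from the search is that the retraining step needs no early stopping on a validation split, so the test set is touched exactly once and the reported metric is not biased by model selection.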