Confidence Threshold Effect on Results #7
I suppose the way to read this is that at confidence 0.90 there is very little confusion between classes. At confidence 0.25 there is greater confusion, though not necessarily between classes; it is mostly between detections and background. At confidence 0.001 the vast majority of detections are FPs (and actually background). Oddly, the person-background FN cell stays roughly the same throughout, around 0.40. I'm not sure what that indicates.
Hi, firstly, sorry for the late response. The confidence-threshold effects are expected, I think: at conf 0.001 there should be many false alarms, and at conf 0.90 there should be few to none. For the second question, I currently don't have an answer. It may be because the number of objects in the person class is significantly higher than in the other classes of the Pascal VOC dataset. Please feel free to ask any further questions.
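To make the threshold effect concrete, here is a minimal, hypothetical sketch (not the repository's actual code) of the filtering step that happens before the matrix is built: lowering the confidence threshold lets many more low-confidence detections through, and those extras are what end up as false alarms. The detection format and names below are illustrative assumptions.

```python
# Hypothetical detections as (class_name, confidence) pairs.
detections = [
    ("person", 0.95), ("person", 0.40), ("car", 0.30),
    ("dog", 0.10), ("person", 0.005), ("car", 0.002),
]

def filter_by_conf(dets, conf_thres):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in dets if d[1] >= conf_thres]

# At conf 0.001 nearly everything survives; at conf 0.90 almost nothing does.
for thres in (0.001, 0.25, 0.90):
    kept = filter_by_conf(detections, thres)
    print(f"conf {thres}: {len(kept)} detections kept")
```

Every surviving detection that fails to match a ground-truth box becomes a false positive, which is why the low-threshold matrix is dominated by background FPs.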
@kaanakan thanks! The confusion matrix is integrated now and automatically produced at the end of YOLOv5 training. It seems to be working well.
I have trained YOLO on my own dataset, and when I plot the confusion matrix it displays an additional class named 'background', which I did not include while training. Can someone please explain this?
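The extra 'background' class is not something you train on; it is a bookkeeping row/column for unmatched boxes. A rough sketch of the idea (illustrative, not the repository's actual implementation): detections with no matching ground-truth box count as false positives against the background column, and ground-truth boxes with no matching detection count as misses in the background row.

```python
import numpy as np

def build_matrix(matches, unmatched_dets, unmatched_gts, num_classes):
    """Rows = predicted class, columns = true class.
    Index num_classes is the synthetic 'background' class."""
    m = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
    for pred_c, true_c in matches:      # detection matched to a GT box
        m[pred_c, true_c] += 1
    for pred_c in unmatched_dets:       # false positives: predicted, no GT
        m[pred_c, num_classes] += 1
    for true_c in unmatched_gts:        # false negatives: GT, no detection
        m[num_classes, true_c] += 1
    return m

# Two real classes (0=person, 1=car) still yield a 3x3 matrix.
cm = build_matrix(matches=[(0, 0), (1, 1), (0, 1)],
                  unmatched_dets=[0], unmatched_gts=[1],
                  num_classes=2)
```

So the background entries measure missed objects and spurious detections, not a class from your labels.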
My two cents while learning ML:
For the matrices above, how do I calculate the accuracy of the model from the confusion matrix?
Hi @kaanakan, I am confused about the person-background FN cell staying around 0.40. How can the background FN confusion in the person class be reduced, and does it affect the model's results? Could you explain in more detail?
Hi @glenn-jocher. May I confirm if you used … I hope for your kind response. Thank you.
Hi, I have this confusion matrix implementation integrated into our YOLOv5 PR here:
ultralytics/yolov5#1474
I noticed during testing that the results depend significantly on the confidence threshold used. I ran an experiment across 3 different common confidence thresholds, but I'm not sure what conclusion to draw from the results.
[Confusion matrix plots attached at conf 0.001, conf 0.25, and conf 0.90]