
Inconsistent confusion matrix #1960

Closed
saumitrabg opened this issue Jan 17, 2021 · 10 comments · Fixed by #2046
Labels
question Further information is requested

Comments

@saumitrabg

❔Question

We are training with yolov5l.pt as a baseline on a 20K+ image training set and a 4K image validation set with 6 classes. The first run produced a decent confusion matrix with TPs around 50-60%. However, in the latest epochs of new training we consistently see high mAP (89-95%), high recall (90%+), and high precision (70-80%), yet the confusion matrix looks wrong. Early on, the intermediate best.pt can detect variations of all 6 classes, but as training progresses and mAP improves, only 1-2 classes are predicted, and those predictions are wrong. Background FP is 1.0. This is happening consistently across all of our training runs.

We haven't been able to get to the bottom of it. The image sizes are 2448x784, and we are training with --rect and --img 1024:

python train.py --rect --img 1024 --batch 2 --epochs 50 --data coco128.yaml --weights yolov5l.pt

We would much appreciate some pointers.
[Attached images: confusion_matrix, confusion_matrix (1)]


saumitrabg added the question label on Jan 17, 2021
github-actions bot commented Jan 17, 2021

👋 Hello @saumitrabg, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

[CI CPU testing status badge]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.

@saumitrabg

Reports attached: reportYolov5.pdf

@glenn-jocher

@saumitrabg The confusion matrix operates correctly; its results are a function of the input parameters. You may want to review the parameters here:

yolov5/utils/metrics.py

Lines 107 to 114 in e8a41e8

class ConfusionMatrix:
    # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix
    def __init__(self, nc, conf=0.25, iou_thres=0.45):
        self.matrix = np.zeros((nc + 1, nc + 1))
        self.nc = nc  # number of classes
        self.conf = conf
        self.iou_thres = iou_thres

and review these threads:
#1474
kaanakan/object_detection_confusion_matrix#7
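
For reference, a minimal sketch of how the matrix could be built with a stricter confidence threshold (the values below are illustrative, not recommendations, and assume you are running from the YOLOv5 repo root so utils.metrics is importable):

from utils.metrics import ConfusionMatrix

nc = 6  # number of classes in the dataset
# Stricter threshold than the default conf=0.25, so low-confidence detections
# are not counted; iou_thres kept at its default.
confusion_matrix = ConfusionMatrix(nc, conf=0.9, iou_thres=0.45)

# During evaluation each batch would then be accumulated and plotted, e.g.:
#   confusion_matrix.process_batch(detections, labels)
#   confusion_matrix.plot(save_dir='runs/test', names=class_names)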

@saumitrabg

Thank you @glenn-jocher. #1474 is interesting and we will take a look.


glenn-jocher commented Jan 17, 2021

@saumitrabg yes, also keep in mind that object detection confusion matrices are quite new, and substantially different from the more common classification confusion matrices.

Primarily, the background will produce significant FPs at lower conf thresholds, and note that the confusion matrix conf is independent of the --conf argument passed to test.py. mAP benefits from the lowest --conf possible (i.e. 0.0 is best), whereas the confusion matrix will 'look' best at higher values, e.g. conf=0.9 in metrics.py L109, as the experiments I ran in kaanakan/object_detection_confusion_matrix#7 show.
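
As a toy illustration of that independence (the confidences below are made up, not taken from this issue), only detections above the matrix's own conf threshold are counted, regardless of the --conf used for mAP:

import numpy as np

# Made-up detection confidences for one image
confidences = np.array([0.05, 0.12, 0.30, 0.55, 0.72, 0.91, 0.97])

for conf in (0.25, 0.6, 0.9):
    kept = (confidences > conf).sum()
    print(f'conf={conf}: {kept} of {confidences.size} detections counted')

# conf=0.25: 5 of 7 detections counted
# conf=0.6:  3 of 7 detections counted
# conf=0.9:  2 of 7 detections counted

Fewer counted detections means fewer entries in the matrix, including fewer background FPs, which is why the matrix 'looks' cleaner at higher conf values.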

@saumitrabg

Got it. I followed the thread you had with kaanakan where you tried different conf values. When I tried conf=0.6 it shows no improvement, even though the xl model reports a high mAP score.
[Attached image: confusion_matrix (2)]

@glenn-jocher

glenn-jocher commented Jan 18, 2021

@saumitrabg It's unclear to me why you would expect your confusion matrix to look a certain way at a given confidence level based on your mAP. The confusion matrix is largely uncorrelated with mAP, which is evaluated across all confidence levels.

A dataset may produce a million FPs and a thousand TPs and still yield excellent mAP if all the FPs are gathered at the right-hand end of the PR curve; the confusion matrix sidesteps this subtlety completely.
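
A simplified worked example of that point (step-wise AP integration rather than the exact interpolation used in test.py, and purely synthetic counts): a thousand high-confidence TPs followed by a million low-confidence FPs still give AP = 1.0, because all of the FPs sit at the far right of the PR curve.

import numpy as np

def average_precision(tp_flags, n_gt):
    # tp_flags: 1/0 per detection, sorted by descending confidence
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1 - tp_flags)
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # step-wise integration of precision over recall
    return np.sum(np.diff(np.concatenate(([0.0], recall))) * precision)

tps = np.ones(1000)        # 1000 correct detections at high confidence
fps = np.zeros(1_000_000)  # 1,000,000 false positives at low confidence

print(average_precision(tps, n_gt=1000))                         # 1.0
print(average_precision(np.concatenate([tps, fps]), n_gt=1000))  # still 1.0

The million FPs never lower precision at any recall the TPs have already reached, so AP is untouched, while a confusion matrix built at a low conf threshold would count every one of them as background FP.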

@glenn-jocher glenn-jocher linked a pull request Jan 26, 2021 that will close this issue
@glenn-jocher

@saumitrabg a confusion matrix bug was recently discovered and fixed in PR #2046. Please git pull to receive this update and let us know if this addresses your original issue.

@saumitrabg

@glenn-jocher Fantastic. We pulled in the latest and we can see a much better confusion matrix.
[Attached image: confusion matrix]

@glenn-jocher

Looks great 😃
