Inconsistent confusion matrix #1960
👋 Hello @saumitrabg, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements
Python 3.8 or later with all requirements.txt dependencies installed, including $ pip install -r requirements.txt

Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@saumitrabg the confusion matrix operates correctly; its results are a function of the input parameters. You may want to review the parameters here: Lines 107 to 114 in e8a41e8
and review these related threads.
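For intuition about why those parameters matter, here is a minimal, simplified sketch of how a detection confusion matrix is built and how its thresholds shape it. This is illustrative code, not the actual YOLOv5 `ConfusionMatrix` implementation; `conf_thres`, `iou_thres`, and the greedy matching loop are assumptions for the example.

```python
import numpy as np

def box_iou(a, b):
    """IoU of two xyxy boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def detection_confusion_matrix(detections, labels, num_classes,
                               conf_thres=0.25, iou_thres=0.45):
    """Illustrative detection confusion matrix (simplified sketch).

    detections: array of rows [x1, y1, x2, y2, conf, cls]
    labels:     array of rows [cls, x1, y1, x2, y2]
    Row/column index num_classes holds the 'background' entries.
    """
    matrix = np.zeros((num_classes + 1, num_classes + 1), dtype=int)
    # Confidence filter: lowering conf_thres admits more detections,
    # typically inflating the background-FP column.
    detections = detections[detections[:, 4] >= conf_thres]
    matched_gt, matched_det = set(), set()
    for di, det in enumerate(detections):  # greedy matching (simplified)
        best_iou, best_gi = 0.0, -1
        for gi, gt in enumerate(labels):
            if gi in matched_gt:
                continue
            iou = box_iou(det[:4], gt[1:5])
            if iou > best_iou:
                best_iou, best_gi = iou, gi
        if best_iou >= iou_thres:
            matched_gt.add(best_gi)
            matched_det.add(di)
            matrix[int(labels[best_gi, 0]), int(det[5])] += 1  # TP or class confusion
    for di, det in enumerate(detections):
        if di not in matched_det:
            matrix[num_classes, int(det[5])] += 1  # unmatched detection: background FP
    for gi, gt in enumerate(labels):
        if gi not in matched_gt:
            matrix[int(gt[0]), num_classes] += 1  # unmatched label: background FN
    return matrix
```

Changing `conf_thres` or `iou_thres` alone can move counts between the TP diagonal and the background row/column, which is why the same weights can produce very different-looking matrices.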
Thank you @glenn-jocher. #1474 is interesting and we will take a look.
@saumitrabg yes, also keep in mind that object detection confusion matrices are quite new, and substantially different from the more common classification confusion matrices. Primarily, the background class will produce significant FPs at lower confidence thresholds.
@saumitrabg it's unclear to me that you should expect your confusion matrix to look any certain way at a given confidence level based on your mAP. The matrix is largely uncorrelated with mAP, which is evaluated across all confidence levels. A dataset may produce a million FPs and a thousand TPs and still produce excellent mAP if all the FPs are gathered to the right of the PR curve, and the confusion matrix sidesteps this subtlety completely.
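The point above can be made concrete with a toy AP computation. The sketch below uses a simplified 101-point interpolated AP (COCO-style), not YOLOv5's actual `ap_per_class` code, and for simplicity assumes every ground truth is eventually matched; it shows that a million low-confidence FPs barely dent AP when all TPs rank above them.

```python
import numpy as np

def average_precision(tp_flags):
    """Simplified 101-point AP over detections sorted by descending confidence.

    tp_flags: sequence of booleans, True where the detection is a TP,
    assumed pre-sorted by confidence. n_gt is taken as the total TP count
    (i.e. all ground truths are found) to keep the sketch minimal.
    """
    flags = np.asarray(tp_flags, dtype=bool)
    tp = np.cumsum(flags)
    fp = np.cumsum(~flags)
    n_gt = tp[-1]
    recall = tp / n_gt
    precision = tp / (tp + fp)
    ap = 0.0
    for r in np.linspace(0, 1, 101):  # 101-point interpolation
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 101
    return ap

# A thousand TPs at high confidence, then a million FPs at low confidence:
flags = [True] * 1000 + [False] * 1_000_000
print(round(average_precision(flags), 3))  # → 1.0: the FPs sit to the right of the PR curve
```

Despite a 1000:1 FP-to-TP ratio, AP stays near-perfect, while a confusion matrix evaluated at a low confidence threshold would show an enormous background-FP column for the same predictions.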
@saumitrabg a confusion matrix bug was recently discovered and fixed in PR #2046. Please update to the latest code and try again.
@glenn-jocher Fantastic. We pulled in the latest and we can see a much better confusion matrix. |
Looks great 😃 |
❔Question
We are training with yolov5l.pt as a baseline, with 20K+ training and 4K validation images across 6 classes. The first run shows a decent confusion matrix with a high TP rate of 50-60%. However, in the latest epochs of new training runs we consistently see high mAP (89-95%), high recall (90%+), and high precision (70-80%), yet the confusion matrix looks strange. As training progresses and mAP improves, intermediate best.pt checkpoints can detect variations of all 6 classes, but over time only 1-2 classes are predicted, and those predictions are wrong. Background FP is 1.0. This happens consistently across all our training runs.
We don't claim to understand the root cause and would appreciate any pointers. The image sizes are 2448x784 and we are using --rect with --img 1024.
train.py --rect --img 1024 --batch 2 --epochs 50 --data coco128.yaml --weights yolov5l.pt
We would much appreciate some pointers.
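One detail worth double-checking with --rect on such wide images: with 2448x784 inputs and --img 1024, rect training should letterbox to roughly 1024x352 (long side scaled to 1024, short side padded up to a stride-32 multiple). The sketch below is a rough approximation of letterbox shape logic, not the exact YOLOv5 dataloader code; `imgsz` and `stride` are assumed parameter names.

```python
import math

def rect_shape(img_w, img_h, imgsz=1024, stride=32):
    """Approximate rect-training shape: scale the long side to imgsz,
    then round both sides up to the nearest stride multiple
    (a sketch, not the exact YOLOv5 implementation)."""
    r = imgsz / max(img_w, img_h)          # scale factor for the long side
    w, h = img_w * r, img_h * r            # resized dimensions
    return (math.ceil(w / stride) * stride,
            math.ceil(h / stride) * stride)

print(rect_shape(2448, 784))  # → (1024, 352)
```

At 1024x352 each original pixel shrinks by a factor of about 2.4, so small objects may fall below detectable size; checking label visibility at the actual training resolution can help rule this out.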