Interpret metrics, graphs, and confusion matrices to judge model performance.
Metric | Question it answers | Good value |
---|---|---|
Precision | How many predicted boxes are correct? | >0.9 |
Recall | How many ground-truth objects were found? | >0.9 |
mAP50 | Average precision @ IoU 0.50 | 0.85–0.95 |
mAP50-95 | mAP averaged over 0.50–0.95 IoU | 0.45–0.65 |

mAP50 is a high-level accuracy gauge, while mAP50-95 evaluates box tightness.
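
As a concrete way to read these numbers off a validation run, here is a minimal sketch assuming the Ultralytics YOLO Python API (which the metric names above suggest); the `best.pt` weights and `data.yaml` dataset paths are placeholders:

```python
from ultralytics import YOLO

# Load trained weights and validate against a dataset YAML.
# "best.pt" and "data.yaml" are placeholder paths.
model = YOLO("best.pt")
metrics = model.val(data="data.yaml")

print(f"Precision: {metrics.box.mp:.3f}")     # mean precision across classes
print(f"Recall:    {metrics.box.mr:.3f}")     # mean recall across classes
print(f"mAP50:     {metrics.box.map50:.3f}")  # AP at IoU 0.50
print(f"mAP50-95:  {metrics.box.map:.3f}")    # AP averaged over IoU 0.50-0.95
```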

Symptom | Likely Cause | Remedy |
---|---|---|
Validation loss ↑ while training loss ↓ | Over-fitting | Add data, early-stop |
High precision, low recall | Conservative confidence threshold | Lower the confidence threshold (sketch below), larger model |
Low precision, high recall | Noisy labels | Clean data, raise the confidence threshold |
Both low | Model too small | Try l or xl variant, increase epochs |
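
Two of the remedies above come down to the confidence threshold. A sketch of sweeping it at prediction time, again assuming the Ultralytics API (`best.pt` and `sample.jpg` are placeholders):

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights

# A lower conf keeps more boxes (recall up, precision down);
# raising it does the opposite.
for conf in (0.10, 0.25, 0.50):
    results = model.predict("sample.jpg", conf=conf, verbose=False)
    print(f"conf={conf:.2f}: {len(results[0].boxes)} detections")
```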

Stop training when val/mAP50 flattens or when early stopping triggers after `patience` epochs.
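
Early stopping is controlled by the trainer's `patience` argument; a sketch assuming the Ultralytics trainer, with illustrative epoch and patience values:

```python
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # placeholder pretrained weights

# patience=50: stop if validation metrics fail to improve for 50 epochs.
model.train(data="data.yaml", epochs=300, patience=50)
```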