evaluation
Evaluation App.
The pyodi evaluation app can be used to evaluate the predictions of an object detection dataset.
Example usage:
pyodi evaluation "data/COCO/COCO_val2017.json" "data/COCO/COCO_val2017_predictions.json"
This app reports the Average Precision (AP) for different IoU thresholds and object areas, and the Average Recall (AR) for different IoU thresholds and maximum numbers of detections.
Example output:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.256
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.438
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.263
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.068
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.278
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.422
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.239
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.353
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.375
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.122
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.416
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.586
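The summary above follows the standard COCO evaluation format. As a reference, here is a minimal sketch of how such AP/AR summaries can be produced with pycocotools; pyodi's internal implementation may differ, and the file paths are placeholders taken from the example above.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Load ground truth annotations and detection results (placeholder paths).
coco_gt = COCO("data/COCO/COCO_val2017.json")
coco_dt = coco_gt.loadRes("data/COCO/COCO_val2017_predictions.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # match detections to ground truth per image and category
coco_eval.accumulate()  # accumulate precision/recall over IoU thresholds and areas
coco_eval.summarize()   # print the 12-metric AP/AR summary shown above
```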
API REFERENCE
evaluation(ground_truth_file, predictions_file, string_to_match=None)
Evaluate the predictions of a dataset.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ground_truth_file | str | Path to COCO ground truth file. | required |
| predictions_file | str | Path to COCO predictions file. | required |
| string_to_match | Optional[str] | If not None, only images whose file_name matches this parameter will be evaluated. | None |
Source code in pyodi/apps/evaluation.py
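Besides the CLI, the function can be called from Python. A minimal sketch, assuming the import path pyodi.apps.evaluation (inferred from the source file shown above); the paths are placeholders:

```python
from pyodi.apps.evaluation import evaluation

evaluation(
    ground_truth_file="data/COCO/COCO_val2017.json",
    predictions_file="data/COCO/COCO_val2017_predictions.json",
    string_to_match=None,  # e.g. "night" to evaluate only images whose file_name contains it
)
```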