Display ConfusionMatrix in StdOut (introduces different meaning of rows and cols to some implementations) #348
Merged
klemen1999 merged 5 commits into main, Feb 23, 2026
Conversation
dtronmans (Contributor) reviewed on Feb 23, 2026
I tested with instance segmentation and it works correctly. For instance segmentation we have two sub-metrics, one for detection and one for segmentation, so maybe in the artifact keys we can change the filename to {sub_name}.json instead of confusion_matrix.json? That way they are logged separately as detection_confusion_matrix.json and segmentation_confusion_matrix.json instead of both mapping to confusion_matrix.json.
I verified with this script:
```python
from luxonis_train import LuxonisModel

config = "configs/crack_instance_segmentation_constantlr.yaml"
model = LuxonisModel(
    config,
    {
        "tracker.is_mlflow": True,
        "tracker.project_name": "cm_test",
        "tracker.run_name": "cm_test_run",
    },
)
artifact_keys = model.get_mlflow_logging_keys()["artifacts"]
cm_keys = [k for k in artifact_keys if "confusion_matrix" in k]
for key in cm_keys:
    print(key)
```
With the proposed change, the detection and segmentation confusion matrices are correctly separated:

```
test/metrics/300/PrecisionSegmentBBoxHead/detection_confusion_matrix.json
test/metrics/300/PrecisionSegmentBBoxHead/segmentation_confusion_matrix.json
val/metrics/0/PrecisionSegmentBBoxHead/detection_confusion_matrix.json
val/metrics/0/PrecisionSegmentBBoxHead/segmentation_confusion_matrix.json
```

Whereas keeping it as is, both map to the same filename:

```
val/metrics/0/PrecisionSegmentBBoxHead/confusion_matrix.json
val/metrics/109/PrecisionSegmentBBoxHead/confusion_matrix.json
val/metrics/119/PrecisionSegmentBBoxHead/confusion_matrix.json
```
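The proposed naming can be sketched as a simple key template (a hypothetical helper for illustration, not the PR's actual code; the path segments mirror the examples above):

```python
# Hypothetical helper illustrating the proposed artifact-key layout:
# embedding the sub-metric name in the filename keeps detection and
# segmentation confusion matrices from colliding on the same key.
def cm_artifact_key(split: str, step: int, node: str, sub_name: str) -> str:
    return f"{split}/metrics/{step}/{node}/{sub_name}.json"


key = cm_artifact_key(
    "val", 0, "PrecisionSegmentBBoxHead", "detection_confusion_matrix"
)
print(key)  # val/metrics/0/PrecisionSegmentBBoxHead/detection_confusion_matrix.json
```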
Collaborator (Author)
Nice find, addressed in 578aaff
Purpose
Until now, confusion matrices were only pushed to the tracker and never displayed in the terminal (stdout). This PR changes that by also printing each confusion matrix under the node in which it was used.
Additionally, in 8f194d0 it introduces a breaking change that unifies what row and column mean in the matrix across all tasks. Until now, the torchmetrics implementation and our own implementation differed here, which could be quite confusing.
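What a labeled stdout rendering can look like is sketched below (a plain-stdlib illustration, not the PR's actual rendering code; the function name and layout are made up for this example):

```python
# Minimal stdlib sketch of printing a labeled confusion matrix to stdout.
# Rows are ground-truth classes, columns are predictions.
def render_cm(matrix: list[list[int]], labels: list[str]) -> str:
    width = max(max(len(label) for label in labels), 5) + 2
    # Header row: blank corner cell, then one column per predicted class.
    header = " " * width + "".join(f"{label:>{width}}" for label in labels)
    rows = [header]
    for label, row in zip(labels, matrix):
        rows.append(f"{label:>{width}}" + "".join(f"{v:>{width}}" for v in row))
    return "\n".join(rows)


print(render_cm([[5, 1], [2, 7]], ["cat", "dog"]))
```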
Specification
None / not applicable

- Multiple matrices under same head (with instance segmentation task)
- Multiple matrices under different heads
- Implementation when `rich_logging: False`

Dependencies & Potential Impact
This PR changes what rows and columns mean in order to unify everything to the torchmetrics notation. With improved visualization and notes in the README, I deem this a worthy change, because the alternative of keeping them different just introduces confusion for no reason.
Deployment Plan
None / not applicable
Testing & Validation
None / not applicable