
fix: guard against ZeroDivisionError in Comparitor._calc_stats #566

Open

haoyu-haoyu wants to merge 3 commits into MIT-LCP:main from haoyu-haoyu:fix/evaluate-zero-division

Conversation

@haoyu-haoyu

Summary

Guard against ZeroDivisionError when comparing annotations with empty reference or test arrays.

Bug

Comparitor._calc_stats() divides tp by (tp + fn) for sensitivity and by n_test for positive predictivity without checking for zero denominators. When either annotation array is empty, this raises ZeroDivisionError.

Fix

Return float("nan") when the denominator is zero, matching the convention used by sklearn.metrics for undefined ratios.

Tests

Three module-level test functions covering all edge cases:

  • Empty reference → sensitivity=NaN, PPV=0.0
  • Empty test → sensitivity=0.0, PPV=NaN
  • Both empty → both NaN

Fixes #278

When there are no reference annotations (tp + fn == 0) or no test annotations (n_test == 0), computing sensitivity and positive predictivity raises ZeroDivisionError. Return NaN instead, matching the convention used by sklearn.metrics for undefined ratios.

Fixes MIT-LCP#278

Cover three boundary scenarios that previously raised ZeroDivisionError:

  • Empty reference, non-empty test → sensitivity=NaN, PPV=0.0
  • Non-empty reference, empty test → sensitivity=0.0, PPV=NaN
  • Both empty → both NaN

The existing test_qrs class uses lowercase naming, which pytest's default collection does not pick up. Move the new empty-annotation tests to module level so they are actually discovered and run in CI.
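The collection issue can be demonstrated in isolation. By default, pytest only collects classes whose names match `Test*`, so a lowercase `test_qrs` class is silently skipped while module-level `test_*` functions are always found. The ratio logic inside the test is a hypothetical stand-in for the guarded computation, not the real Comparitor internals:

```python
import math

class test_qrs:  # NOT collected: default python_classes pattern is "Test*"
    def test_inside_class(self):
        assert False  # would fail, but pytest never runs it

# Module-level functions named "test_*" are always collected.
def test_both_empty_gives_nan():
    # Hypothetical stand-in for the guarded ratios with both arrays empty:
    tp, fn, n_test = 0, 0, 0
    sensitivity = float("nan") if tp + fn == 0 else tp / (tp + fn)
    ppv = float("nan") if n_test == 0 else tp / n_test
    assert math.isnan(sensitivity) and math.isnan(ppv)
```

Running `pytest -q` on a file like this reports one passing test; the assertion inside the lowercase class is never executed, which is why moving the new tests to module level matters.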


Development

Successfully merging this pull request may close these issues.

ZeroDivisionError in evaluate.py
