
fix: resolve ZeroDivisionError and NameError in OpenAICloseSetClsEvaluator.print_results #75

Open
octo-patch wants to merge 1 commit into InternRobotics:master from octo-patch:fix/evaluator-print-results-zero-division

Conversation

@octo-patch

Problem

OpenAICloseSetClsEvaluator.print_results() in pointllm/eval/evaluator.py contains two bugs that cause crashes when total_predictions == 0 (e.g. evaluation on an empty result set):

  1. ZeroDivisionError: A stray accuracy = self.correct_predictions / self.total_predictions * 100 line was placed outside the if/else guard block (line 568 before this fix). Because it runs unconditionally, it overwrites the accuracy = 0 safety value set in the if branch, and when total_predictions == 0 the division raises ZeroDivisionError outright.

  2. NameError: clean_accuracy was only assigned inside the else branch. When the if branch executes (i.e. total_predictions - invalid_responses == 0), clean_accuracy was never defined, causing NameError: name 'clean_accuracy' is not defined on the subsequent print call.

Note: the companion method save_results() already handles both variables correctly in both branches — this fix brings print_results() into alignment with that implementation.
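The two failure modes can be reproduced in isolation. The sketch below is a hypothetical, simplified reconstruction of the buggy control flow; the names follow the PR description, not the verbatim pointllm source:

```python
# Simplified reconstruction of the buggy print_results() structure (hypothetical names).
def buggy_compute(correct, total, invalid):
    if total - invalid == 0:
        accuracy = 0                                    # safety value set here...
    else:
        accuracy = correct / total * 100
        clean_accuracy = correct / (total - invalid) * 100
    accuracy = correct / total * 100                    # stray line: runs on BOTH paths
    return accuracy, clean_accuracy                     # clean_accuracy unbound on the if-path

# Bug 1: empty result set -> the stray line divides by zero.
try:
    buggy_compute(0, 0, 0)
except ZeroDivisionError:
    print("ZeroDivisionError reproduced")

# Bug 2: all responses invalid -> if-branch runs, clean_accuracy never assigned.
try:
    buggy_compute(0, 3, 3)
except NameError:
    print("NameError reproduced")
```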

Solution

  • Remove the duplicated accuracy assignment outside the if/else block.
  • Initialize clean_accuracy = 0 in the if branch, matching the pattern already used in save_results().

Additional fixes

  • Fix two typos: "unale to parse" → "unable to parse" (in OpenAICloseSetClsEvaluator.parse_gpt_response_evaluate and OpenAIObjectCaptioningEvaluator.parse_gpt_response_evaluate).

Testing

Manually verified both bugs with a minimal reproducer:

```python
# Before fix: raises ZeroDivisionError when total_predictions == 0
# Before fix: raises NameError for clean_accuracy when if-branch executes
# After fix: returns accuracy=0, clean_accuracy=0 safely
```
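Those comments can be expanded into a runnable sketch of the corrected logic. This is a minimal standalone mirror of the fixed guard, with names assumed from the description above, not the verbatim source:

```python
def compute_accuracies(correct_predictions, total_predictions, invalid_responses):
    """Simplified mirror of the fixed print_results() guard (hypothetical names)."""
    if total_predictions - invalid_responses == 0:
        accuracy = 0        # safety value; no stray division outside this guard
        clean_accuracy = 0  # now initialized on this branch too, matching save_results()
    else:
        accuracy = correct_predictions / total_predictions * 100
        clean_accuracy = correct_predictions / (total_predictions - invalid_responses) * 100
    return accuracy, clean_accuracy

# Empty result set no longer crashes:
print(compute_accuracies(0, 0, 0))  # (0, 0)
```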

