[SR-4590] compare_perf_tests.py fails when new benchmarks are added #47167

Closed
palimondo mannequin opened this issue Apr 14, 2017 · 5 comments
palimondo mannequin commented Apr 14, 2017

Previous ID: SR-4590
Radar: None
Original Reporter: @palimondo
Type: Bug
Status: Closed
Resolution: Done

Additional Detail from JIRA:
Votes: 0
Component/s: Project Infrastructure
Labels: Bug, Performance
Assignee: @moiseev
Priority: Medium

md5: a66ee5743c5a5e472fe4533adb61dac8

relates to:

  • SR-4597 Benchmark results have wrong MEAN, MEDIAN and SD
  • SR-4601 Report Added and Removed Benchmarks in Performance Comparison

Issue Description:

When adding new performance tests to the benchmark suite, the compare_perf_tests.py script fails with the following error:

    Logging results to: /Users/mondo/Developer/swift-source/build/Ninja-ReleaseAssert/swift-macosx-x86_64/benchmark/logs/master/Benchmark_Onone-20170413185239.log
    Comparing master/Benchmark_O-20170412234727.log master/Benchmark_O-20170413182015.log ...
    Traceback (most recent call last):
      File "/Users/mondo/Developer/swift-source/swift/benchmark/scripts/compare_perf_tests.py", line 371, in <module>
        sys.exit(main())
      File "/Users/mondo/Developer/swift-source/swift/benchmark/scripts/compare_perf_tests.py", line 154, in main
        ratio = (old_results[key] + 0.001) / (new_results[key] + 0.001)
    KeyError: 'PrefixLongCRangeIter'
    Comparing master/Benchmark_Onone-20170413001651.log master/Benchmark_Onone-20170413185239.log ...
    Traceback (most recent call last):
      File "/Users/mondo/Developer/swift-source/swift/benchmark/scripts/compare_perf_tests.py", line 371, in <module>
        sys.exit(main())
      File "/Users/mondo/Developer/swift-source/swift/benchmark/scripts/compare_perf_tests.py", line 154, in main
        ratio = (old_results[key] + 0.001) / (new_results[key] + 0.001)
    KeyError: 'PrefixLongCRangeIter'
palimondo mannequin commented Apr 14, 2017

The error is caused by accessing old_results with keys taken from new_results, which includes the newly added tests:

    for key in new_results.keys():
        ratio = (old_results[key] + 0.001) / (new_results[key] + 0.001)

Given that, I think we should iterate over the keys from old_results instead.
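
For illustration, here is a minimal sketch of the failure mode. The dict contents are hypothetical; in the script, old_results and new_results map benchmark names to their measured values:

    # Hypothetical data: 'PrefixLongCRangeIter' appears only in the new log.
    old_results = {'AngryPhonebook': 2712}
    new_results = {'AngryPhonebook': 2703, 'PrefixLongCRangeIter': 304}

    for key in new_results.keys():
        # Raises KeyError: 'PrefixLongCRangeIter', since that key
        # is absent from old_results.
        ratio = (old_results[key] + 0.001) / (new_results[key] + 0.001)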

belkadan commented

cc @lplarson

palimondo mannequin commented Apr 14, 2017

I have a local fix that just runs the loop over the intersection of the old and new test sets. But I’m thinking about listing the Added/Removed tests, too… Any input on the desired formatting?
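
A sketch of that local fix, assuming the surrounding code stays as quoted above (the final patch may well differ):

    # Compare only the tests present in both runs.
    common_keys = set(old_results.keys()) & set(new_results.keys())
    for key in common_keys:
        ratio = (old_results[key] + 0.001) / (new_results[key] + 0.001)

The Added/Removed listing would then fall out of the same set arithmetic:

    # Tests present in only one of the two runs.
    added = sorted(set(new_results) - set(old_results))
    removed = sorted(set(old_results) - set(new_results))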

palimondo mannequin commented Apr 15, 2017

Nah, I’ll defer that addition – it requires refactoring of the script.

palimondo mannequin commented Apr 15, 2017

Fixed by astmus (JIRA User) in #8923

swift-ci transferred this issue from apple/swift-issues on Apr 25, 2022
This issue was closed.