nixpkgs/pkgs/development/interpreters/python/cpython/2.7/profile-task.patch

cpython: Use --enable-optimizations, for a 16% speedup.

Without this flag, the configure script prints a warning at the end, like this (reformatted):

  If you want a release build with all stable optimizations active
  (PGO, etc), please run ./configure --enable-optimizations

We're doing a build to distribute to people for day-to-day use, doing things other than developing the Python interpreter. So that's certainly a release build -- we're the target audience for this recommendation.

---

And, trying it out, upstream isn't kidding! I ran the standard benchmark suite that the CPython developers use for performance work, "pyperformance". Following its usage instructions:
  https://pyperformance.readthedocs.io/usage.html
I ran the whole suite, like so:

  $ nix-shell -p ./result."$variant" --run '
      cd $(mktemp -d); python -m venv venv; . venv/bin/activate
      pip install pyperformance
      pyperformance run -o ~/tmp/result.'"$variant"'.json
    '

and then examined the results with commands like:

  $ python -m pyperf compare_to --table -G \
      ~/tmp/result.{$before,$after}.json

Across all the benchmarks in the suite, the median speedup was 16%. (Meaning 1.16x faster; 14% less time.) The middle half of them ranged from a 13% to a 22% speedup. Each of the 60 benchmarks in the suite got faster, by speedups ranging from 3% to 53%.

---

One reason this isn't just the default to begin with is that, until recently, it made the build a lot slower. What it does is turn on profile-guided optimization, which means first build for profiling, then run some task to get a profile, then build again using the profile. And, short of further customization, the task it would use would be nearly the full test suite, which includes a lot of expensive and slow tests, and can easily take half an hour to run.

Happily, in 2019 an upstream developer did the work to carefully select a more appropriate set of tests to use for the profile:
  https://github.com/python/cpython/commit/4e16a4a31
  https://bugs.python.org/issue36044
This suite takes just 2 minutes to run. And the resulting final build is actually slightly faster than with the much longer suite, at least as measured by those standard "pyperformance" benchmarks.

That work went into the 3.8 release, but the same list works great if used on older releases too.

So, start passing that --enable-optimizations flag; and backport that good-for-PGO set of tests, so that we use it on all releases.
Committed: 2020-03-29 22:05:04 +01:00
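For readers unfamiliar with the mechanics, the three-stage build that --enable-optimizations triggers can be sketched roughly as follows. This is an illustrative outline of CPython's profile-opt flow, not a quote from the Makefile; the exact compiler flags depend on the toolchain:

  $ ./configure --enable-optimizations
  $ make    # the default target now builds profile-opt, which roughly means:
            #   stage 1: compile an instrumented interpreter (e.g. GCC's -fprofile-generate)
            #   stage 2: run $(PROFILE_TASK) under it to collect profile data
            #   stage 3: recompile using that profile (e.g. -fprofile-use)

The patch below changes only what stage 2 runs, swapping the near-full test suite for the short, carefully chosen list.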
Backport from CPython 3.8 of a good list of tests to run for PGO.
Upstream commit:
https://github.com/python/cpython/commit/4e16a4a31
Upstream discussion:
https://bugs.python.org/issue36044
diff --git a/Makefile.pre.in b/Makefile.pre.in
index 00fdd21ce..713dc1e53 100644
--- a/Makefile.pre.in
+++ b/Makefile.pre.in
@@ -259,7 +259,7 @@ TCLTK_LIBS=
# The task to run while instrumented when building the profile-opt target.
# We exclude unittests with -x that take a rediculious amount of time to
# run in the instrumented training build or do not provide much value.
-PROFILE_TASK=-m test.regrtest --pgo -x test_asyncore test_gdb test_multiprocessing test_subprocess
+PROFILE_TASK=-m test.regrtest --pgo test_array test_base64 test_binascii test_binop test_bisect test_bytes test_bz2 test_cmath test_codecs test_collections test_complex test_dataclasses test_datetime test_decimal test_difflib test_embed test_float test_fstring test_functools test_generators test_hashlib test_heapq test_int test_itertools test_json test_long test_lzma test_math test_memoryview test_operator test_ordered_dict test_pickle test_pprint test_re test_set test_sqlite test_statistics test_struct test_tabnanny test_time test_unicode test_xml_etree test_xml_etree_c
# report files for gcov / lcov coverage report
COVERAGE_INFO= $(abs_builddir)/coverage.info
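To sanity-check the backport by hand, one can apply this patch to a CPython 2.7 source tree and time the instrumented training run. The following is only a sketch; the tarball version is a placeholder, and the -p1 strip level matches the a/ and b/ prefixes in the diff above:

  $ tar xf Python-2.7.18.tar.xz && cd Python-2.7.18
  $ patch -p1 < profile-task.patch
  $ ./configure --enable-optimizations
  $ make    # the profiling run should now take about 2 minutes, not half an hour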