Conversation
Sorry, I'm AFK until Tuesday, so I cannot take a close look at the moment. Let's separate making the benchmarks meaningful and forcing inlining into two separate PRs. The reason is that I've run into a similar-looking inlining regression before; see https://gitlab.haskell.org/ghc/ghc/-/issues/19557 and https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5547. Does the claim about a 1000x speedup remain valid? Is it possible to set up
Take your time, I don't expect mass adoption of ghc-9 by Tuesday 😄
They are already implemented as two separate commits, which should be good enough for separating the concerns.
Yes. That claim did not use these benchmarks; instead we used the legacy benchmarks that were included in old random as well. I personally didn't even use this benchmark suite for anything, because I had my own repo that I used when working on random.
It's possible, but it doesn't make a difference. And if it doesn't make a difference, I am not gonna waste my time doing the work.
Force-pushed from 7f8cc7f to c9471d4 (compare)
Benchmarks now consume the resulting generator produced by the computation being benchmarked, so it can no longer be optimized away by GHC. Fixing the benchmarks brings them to more realistic and consistent runtimes.
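A rough sketch of what that kind of fix can look like (assuming criterion and random >= 1.2; `genMany` is an illustrative name, not a function from this PR): the benchmarked function returns the final generator, and the benchmark forces it, so GHC cannot discard the whole chain of generator steps as dead code.

```haskell
import Criterion.Main (bench, defaultMain, whnf)
import System.Random (StdGen, genWord64, mkStdGen)

-- Run 'genWord64' n times and return the *final* generator.
-- Because the result is forced by 'whnf', every intermediate
-- step stays live and cannot be optimized away.
genMany :: Int -> StdGen -> StdGen
genMany n g0 = go n g0
  where
    go 0 g = g
    go k g = let (_w, g') = genWord64 g in go (k - 1) g'

main :: IO ()
main = defaultMain
  [ bench "genWord64" $ whnf (genMany 100000) (mkStdGen 2021) ]
```

Using `whnf` on the returned generator (rather than on a discarded random value) is the key point: the benchmark's observable result now depends on all the work performed.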
The fixed benchmarks still pointed to regressions caused by worse inliner heuristics in ghc-9. To work around that problem we just need to add INLINE pragmas to all the relevant functions and call it a day, which is also done in this PR.
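The workaround looks roughly like this (a sketch; `stepGen` is a hypothetical name standing in for the small hot functions the PR annotates): an INLINE pragma overrides GHC's size-based inlining heuristics and forces inlining at every call site.

```haskell
import Data.Word (Word64)
import System.Random (RandomGen, genWord64)

-- Illustrative wrapper, not a function from this PR.
-- Without the pragma, GHC 9's inliner may decide not to inline
-- this through the call chain, leaving an expensive out-of-line
-- call in the hot loop.
stepGen :: RandomGen g => Word64 -> g -> (Word64, g)
stepGen offset g = let (w, g') = genWord64 g in (w + offset, g')
{-# INLINE stepGen #-}
```

The downside is more code duplication at call sites and longer compile times, but for tiny generator-stepping functions that trade-off is usually worth it.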
With the inliner problems out of the way, we are still left with a few regressions on ghc-9, albeit not as drastic ones (only ~60-70%). I think it would be good to follow up on those with a separate investigation in another PR.