Conversation
It looks like this pull request may not have tests. Please make sure to add tests before merging. If you need an exemption to this rule, contact Hixie on the #hackers channel in Chat (don't just cc him here, he won't see it! He's on Discord!).

If you are not sure if you need tests, consider this rule of thumb: the purpose of a test is to make sure someone doesn't accidentally revert the fix. Ask yourself, is there anything in your PR that you feel it is important we not accidentally revert back to how it was before your fix?

Reviewers: Read the Tree Hygiene page and make sure this patch meets those guidelines before LGTMing.
@dnfield Can you run these benchmarks on the target devices?
Hmm, shouldn't benchmarks be exempt from the test check, robots?
I'll run these on the low-end device next time I'm in the office.
This pull request is not suitable for automatic merging in its current state.
Related: #101518
Here are the results of doing an A/B run of these benchmarks against a pre-DlColorSource engine. In each of these results, the default (A) engine uses the DlColorSource and the local (B) engine uses SkGradient. The "Speed-up" column shows how much faster Sk is than Dl (or slower, if the factor is less than 1.0).

Results for the "static" version of the benchmark:

Results for the "consistent" version of the benchmark:

Results for the "dynamic" version of the benchmark:
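As a side note on how to read the numbers above: a minimal sketch of how a "Speed-up" factor like the one in these tables is typically computed from per-engine frame times. The function and the sample timings below are hypothetical, not values from this PR:

```python
# Hypothetical illustration of the "Speed-up" column in an A/B benchmark
# comparison. A is the default engine (DlColorSource), B is the local
# engine (SkGradient). A factor below 1.0 means B is slower than A.

def speed_up(a_avg_frame_ms: float, b_avg_frame_ms: float) -> float:
    """Factor by which the B engine is faster than the A engine."""
    return a_avg_frame_ms / b_avg_frame_ms

# Example with made-up timings: A averages 12.0 ms/frame, B 10.0 ms/frame.
print(speed_up(12.0, 10.0))  # 1.2 -> B is 1.2x faster than A
print(speed_up(10.0, 12.5))  # 0.8 -> B is slower than A
```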
We are seeing a slight performance regression in some downstream projects after an engine commit that changed the way we store paint shaders (image, gradient, etc.), and it looks like we don't have any benchmarks tracking the performance of our gradient shaders. This PR adds 3 gradient benchmarks: