refactor: change r precision calculation and documentation #621
Conversation
Codecov Report
@@ Coverage Diff @@
## main #621 +/- ##
==========================================
+ Coverage 84.91% 86.58% +1.67%
==========================================
Files 133 133
Lines 6715 6717 +2
==========================================
+ Hits 5702 5816 +114
+ Misses 1013 901 -112
Tests for the changes will be added in #617.
@@ -12,20 +12,25 @@ def _check_k(k):

def r_precision(binary_relevance: List[int], **kwargs) -> float:
There need to be some tests covering the new behavior.
Yes, I agree. Currently there are no tests for the metric functions. I have created a test which exercises this behavior together with the evaluate function in #617 (which has to be adapted after one of these PRs is merged), but I can also create tests which directly test the functions in docarray.math here.
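Such direct unit tests for the metric functions could look roughly like this (a hypothetical sketch; the inlined `r_precision` below is only a stand-in so the tests are self-contained, not the actual `docarray.math` implementation):

```python
def r_precision(binary_relevance):
    """Stand-in R-Precision: precision at rank R, where R is the
    total number of relevant documents (a sketch for illustration)."""
    r = sum(binary_relevance)
    return sum(binary_relevance[:r]) / r if r else 0.0


# direct unit tests for the metric function itself
def test_r_precision_perfect():
    # every retrieved document is relevant
    assert r_precision([1, 1, 1]) == 1.0


def test_r_precision_no_relevant():
    # no relevant documents at all: score is defined as 0.0
    assert r_precision([0, 0, 0]) == 0.0


def test_r_precision_partial():
    # R = 3 relevant documents overall, 2 of them in the top-3 ranks
    assert abs(r_precision([1, 0, 1, 1, 0]) - 2 / 3) < 1e-9
```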
📝 Docs are deployed on https://ft-refactor-evaluation-metrics--jina-docs.netlify.app 🎉
Goals:
- correct the implementation of R-Precision
- add hints in the documentation about potentially incorrect evaluation scores
- add tests for all metric functions
- add links to the developer reference of the metric functions in the documentation
- check and update the documentation, if required (see guide)