forked from BTrDB/btrdb-python
Comparing changes
base repository: PingThingsIO/btrdb-python
base: v5.30.2
head repository: PingThingsIO/btrdb-python
compare: v5.31.0
- 8 commits
- 27 files changed
- 7 contributors
Commits on Jul 24, 2023
- e7f1db6: Release v5.30.2
  * Update btrdb/version.py
  * Small pre-commit fix.
  Co-authored-by: Justin Gilmer <[email protected]>
Commits on Aug 25, 2023
- 0d12d58
- be0b757: Provide option to sort the arrow tables (#47)
  Previously, the arrow endpoints were not guaranteed to return their data in
  sorted order. This PR lets the user control sorting for single-stream calls
  and sorts streamset operations by time by default. Streamset transformers
  such as streamset.to_dataframe in the old version of the API used a
  PointBuffer that sorted values by time before returning them to the user.
  The single-stream arrow methods now take a boolean argument to specify
  whether the returned table should be sorted on the 'time' column; it
  defaults to False. The streamset methods, however, are always sorted and
  the user cannot switch that off; we can change that later if needed.
  * Only sort window queries.
  * Rename the `sorted` parameter to `sort_time`.
  (A usage sketch follows below.)
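A minimal usage sketch of the flag above, assuming a connected `Stream`
object named `stream`, example nanosecond bounds, and that the single-stream
arrow method is `arrow_values` taking `sort_time` as described; the
client-side `sort_by` shows the equivalent pyarrow sort:

```python
# Hypothetical nanosecond timestamps bounding one minute of data.
start_ns = 1_500_000_000_000_000_000
end_ns = start_ns + 60 * 1_000_000_000

# `stream` is assumed to be a connected btrdb Stream whose arrow method
# accepts the sort_time flag described above.
table = stream.arrow_values(start=start_ns, end=end_ns, sort_time=True)

# With sort_time=False the rows may arrive unordered; sorting the returned
# pyarrow Table on the 'time' column client-side is equivalent:
unsorted = stream.arrow_values(start=start_ns, end=end_ns, sort_time=False)
table = unsorted.sort_by("time")
```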
Commits on Aug 30, 2023
-
Remove 4MB limit for gRPC message payloads (#49)
Currently our logic chunks data to send over gRPC according to numbers of rows of data, this led to large streamsets in the multivalue api erroring out while trying to recv this data. This PR sets the limit for the client to receive from the server to be unlimited for the time being. This will allow arbitrarly-sized streamsets to have successful multivalue queries. In the future we should update our logic to better handle these size limits when sending from the server, but this is a patch fix for now.
Configuration menu - View commit details
-
Copy full SHA for 334ed89 - Browse repository at this point
Copy the full SHA 334ed89View commit details
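For reference, the receive cap in gRPC is controlled through channel
options; a minimal sketch of lifting it on a plain channel (the address is
hypothetical, and btrdb-python's real channel setup with auth/TLS is more
involved):

```python
import grpc

# -1 removes the default 4 MB cap on messages received from the server,
# matching the "unlimited for the time being" behavior described above.
channel = grpc.insecure_channel(
    "btrdb.example.com:4410",  # hypothetical endpoint address
    options=[("grpc.max_receive_message_length", -1)],
)
```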
Commits on Sep 11, 2023
-
Update documentation for arrow methods (#50)
* Include documentation for arrow methods * Update copyright * Update broken links * Add Dash minimal example * Add page for arrow queries * Add changelog to main codebase and to api docs * Remove any broken links * Update installation documentation
Configuration menu - View commit details
-
Copy full SHA for 1783796 - Browse repository at this point
Copy the full SHA 1783796View commit details
Commits on Sep 25, 2023
- 2f32815:
  * Threadpool executor (#22): Release v5.15.0; update protobuf to v4.22.3;
    add threaded streamset calls using concurrent.futures.ThreadPoolExecutor
    (see the pool sketch at the end of this entry); blacken code; update for
    failing tests; skip flake8 during testing (pytest-flake8 has issues with
    later flake8 releases, tholo/pytest-flake8#92); update .gitignore; remove
    an extra print; remove the PyCharm .idea folder.
    Co-authored-by: David Konigsberg <[email protected]>
    Co-authored-by: Jeff Lin <[email protected]>
  * Threaded arrow (#23): update proto definitions; update the endpoint to
    support arrow methods; additional arrow updates; update transformers and
    add a polars conversion; update the check for an arrow-enabled btrdb
    (not yet turned on, since the server version it will ship in is unknown;
    the method is commented out but easy to re-enable); use IPC streams to
    send the arrow bytes for insert, building a buffered output stream and
    sending its bytes to btrdb instead of writing feather files into an
    io.BytesIO stream (see the IPC sketch below); create arrow-specific
    stream methods; update the test conn object to support a minor version;
    migrate arrow code and update tests; arrow and standard streamset
    insert; basic arrow-to-dataframe transformer; support multirawvalues and
    arrow transformers; multivalue arrow queries (in progress); properly
    filter streams by sampling frequency; update arrow values queries for
    multivalues; update param passing for sampling frequency; update index
    passing and ignore depth; benchmark raw values queries for arrow and the
    current API; add aligned windows and run func; streamset read benchmarks
    (WIP), including a `precise` boolean flag for streamset.count and an
    updated mock return value for versionMajor.
    Co-authored-by: David Konigsberg <[email protected]>
    Co-authored-by: Jeff Lin <[email protected]>
  * Add 3.10 python to the testing matrix (#21): fix YAML parsing; update
    requirements to support 3.10; use pip-tools' `pip-compile` to generate
    requirements.txt files from the updated pyproject.toml; add a
    pyproject.toml with extras so pip can install `btrdb`, `btrdb[data]`,
    `btrdb[all]`, `btrdb[testing]`, or `btrdb[ray]`; update transformers.py
    to build a numpy array of dtype `object` when subarrays differ in length
    (tests still pass); recompile the btrdb proto files with the latest
    protobuf and grpc plugins; split requirements.txt into several files
    plus a pinned lock file for easier future updates; ignore
    protoc-generated flake errors; update test requirements; include
    pre-commit setup and lints.
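To illustrate the IPC change in #23 above, a rough pyarrow sketch of
serializing a table through a buffered IPC stream rather than a feather
file. The two-column schema is an assumption about what btrdb actually
sends; nullable=False anticipates the Go/Python schema difference fixed in
#28 below:

```python
import pyarrow as pa

# Assumed insert payload: a 'time' column of ns timestamps and a 'value'
# column. nullable=False mirrors the schema fix described in #28 below.
schema = pa.schema([
    pa.field("time", pa.timestamp("ns", tz="UTC"), nullable=False),
    pa.field("value", pa.float64(), nullable=False),
])
table = pa.table(
    {"time": [1_500_000_000_000_000_000, 1_500_000_000_100_000_000],
     "value": [1.0, 2.0]},
    schema=schema,
)

# Write the table through a buffered IPC stream and hand the raw bytes to
# the insert endpoint, instead of producing an intermediate feather file.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
arrow_bytes = sink.getvalue().to_pybytes()
```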
  * Update pre-commit.yaml: add staging to the pre-commit checks.
  * Fix missing logging import, rerun pre-commit (#24).
  * Add basic docstring to the endpoint object (#25).
  * Update benchmark scripts.
  * Multistream read/insert benchmarks (#26): fix multistream endpoint bugs
    (the streamset passed incorrect params to the endpoint, and the endpoint
    returns no `version` in its response, just `stat` and `arrowBytes`;
    params were updated and a NoneType is passed around in place of the
    version so the same bytes-decoding logic works everywhere); add
    multistream benchmark methods with and without timesnap.
  * Add insert benchmarking methods (#27): stream inserts from (time, value)
    tuples; stream inserts from pyarrow tables of timestamps and value
    columns; streamset inserts from a dict mapping streamset stream uuids to
    lists of (time, value) tuples; streamset inserts from a dict mapping
    stream uuids to pyarrow tables.
  * Fix arrow inserts (#28): include nullable=False in the pyarrow schemas
    used for inserts (the only difference between the Go and Python
    schemas); use a BytesIO stream as the sink for the IPC bytes; start the
    integration test suite; add more streamset integration tests; support
    authenticated requests without encryption.
  * Optimize logging calls (#30): previously the debug logging built its
    f-strings whether or not logging.DEBUG was the active level, which hurts
    performance, especially when benchmarking. Now a cached IS_DEBUG flag
    guards the stream operations, and elsewhere logger.isEnabledFor is
    checked; in stream.py the check runs once and its result is reused for
    the rest of the logic (see the logging sketch below).
  * Add more arrow tests and minor refactoring; more integration test cases;
    restructure tests; mark new failing tests as expected failures for now;
    disable gzip compression (it is very slow); re-enable a test after the
    server was fixed.
  * Update pandas testing and fix flake8 issues (#31): update stream logic
    for unpacking arrow tables; update integration tests; add __init__.py
    for the integration tests; add tests comparing the arrow methods with
    their old-API counterparts.
  * Add tests for timesnap boundary conditions (#32): more integration
    tests; modify the name_callable ability of arrow_values; remove
    extraneous prints; include retry logic; fix statpoint order in arrow and
    other bugs in the arrow methods; account for NaNs in testing; update
    GitHub Action versions; add a duplicate-values test; remove an empty
    test.
    Co-authored-by: andrewchambers <[email protected]>
  * Update docs for arrow (#35): final enhanced edits.
  * Only enable arrow endpoints when the server version is >= 5.30 (#36):
    once a v5.30 tag of the server with arrow/multistream exists, this can
    be merged to complete the ticket.
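The logging optimization in #30 boils down to guarding f-string construction
behind a cached level check; a minimal sketch of the pattern (the IS_DEBUG
name comes from the commit message, everything else is illustrative):

```python
import logging

logger = logging.getLogger("btrdb")

# Evaluated once and reused, as described for stream.py. The trade-off is
# that level changes made after this point are not picked up.
IS_DEBUG = logger.isEnabledFor(logging.DEBUG)

def fetch(uuid, start, end):
    # The f-string is only built when DEBUG is actually enabled, instead of
    # unconditionally on every call.
    if IS_DEBUG:
        logger.debug(f"fetching {uuid} from {start} to {end}")
```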
  * Update arrow notes, small doc changes (#38).
  * fix: patch up stream object type and other bugs (#33): resolve depth
    errors in stream window; resolve remaining test warnings; resolve test
    imports; chore: add pre-commit install to the readme.
  * Update staging branch with latest `master` changes (#52).
  Co-authored-by: David Konigsberg <[email protected]>
  Co-authored-by: Jeff Lin <[email protected]>
  Co-authored-by: Andrew Chambers <[email protected]>
  Co-authored-by: andrewchambers <[email protected]>
  Co-authored-by: Taite Nazifi <[email protected]>
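Finally, the threading model from #22: streamset calls fan per-stream work
out across a pool. A generic sketch; the stream objects and the per-stream
`values` call stand in for btrdb-python's internals:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_values(stream, start, end):
    # Placeholder for the per-stream RPC each worker performs.
    return stream.values(start, end)

def streamset_values(streams, start, end):
    # One worker per stream, mirroring the ThreadPoolExecutor-based
    # streamset calls added in #22.
    with ThreadPoolExecutor(max_workers=len(streams) or 1) as pool:
        futures = [pool.submit(fetch_values, s, start, end) for s in streams]
        return [f.result() for f in futures]
```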
- c82f5d2
- 1e9ed9a
This comparison is too large to render here. You can run this command
locally to see it on your machine:

    git diff v5.30.2...v5.31.0