ReductStore is a high-performance, time-series object storage and streaming solution for ELT-based data acquisition (DAQ) systems in robotics and industrial IoT (IIoT). It's designed to handle large volumes of unstructured data - images, sensor readings, logs, files, ROS bags - captured in raw form and stored with a precise time index (timestamp) and optional labels (e.g. device status, AI inference). This enables fast, efficient retrieval based on when the data was collected and how it's categorized, while also allowing control over data reduction strategies by replicating (streaming) only selected data from the edge to the cloud.
For more information, please visit https://www.reduct.store/.
There are numerous time-series databases on the market that offer remarkable functionality and scalability. However, they concentrate on numeric data and have limited support for unstructured data, which at best can be stored as strings.
On the other hand, S3-like object storage is a good place to keep blob objects, but it provides no API for working with data in the time domain.
There are many kinds of applications that need to collect unstructured data - images, high-frequency sensor data, binary packages, or large text documents - and provide access to its history. Many companies build in-house storage for such applications by combining a TSDB with blob storage. This can work, but keeping data consistent across both databases, implementing retention policies, and providing fast access is a challenging development task.
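To make the core idea concrete, here is a minimal, self-contained sketch (plain Python, not ReductStore code) of a time-indexed blob store: records are kept sorted by timestamp, so a time-range query reduces to two binary searches.

```python
from bisect import bisect_left


class TimeIndexedStore:
    """Toy time-indexed blob store. This is only an illustration of
    retrieval by time range, not ReductStore's implementation."""

    def __init__(self):
        self._timestamps = []  # sorted microsecond timestamps
        self._blobs = []       # blobs, parallel to _timestamps

    def write(self, timestamp_us, blob):
        # Assume records arrive in chronological order
        self._timestamps.append(timestamp_us)
        self._blobs.append(blob)

    def query(self, start_us, stop_us):
        # Two binary searches select the half-open range [start, stop)
        lo = bisect_left(self._timestamps, start_us)
        hi = bisect_left(self._timestamps, stop_us)
        return list(zip(self._timestamps[lo:hi], self._blobs[lo:hi]))


store = TimeIndexedStore()
store.write(1_000_000, b"frame-1")
store.write(2_000_000, b"frame-2")
store.write(3_000_000, b"frame-3")
print(store.query(1_000_000, 3_000_000))  # first two records
```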
The ReductStore project aims to provide a complete solution for applications that need to store and access unstructured data over specific time intervals. It guarantees that your data will not overflow your hard disk, and it batches records to reduce the number of HTTP requests on high-latency networks.
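The batching idea can be illustrated conceptually (plain Python, not ReductStore's wire protocol): grouping many small records into a few larger requests amortizes per-request latency.

```python
def batch_records(records, max_batch_size):
    """Group records into batches whose total payload stays under
    max_batch_size bytes, so each batch can be sent in one request.
    Conceptual sketch only; ReductStore's actual batching is handled
    by its server API and client SDKs."""
    batches, current, size = [], [], 0
    for blob in records:
        if current and size + len(blob) > max_batch_size:
            batches.append(current)  # flush the full batch
            current, size = [], 0
        current.append(blob)
        size += len(blob)
    if current:
        batches.append(current)
    return batches


blobs = [b"x" * 40 for _ in range(5)]      # five 40-byte records
print(len(batch_records(blobs, 100)))      # 3 batches instead of 5 requests
```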
All of these features make the database the right choice for edge computing and IoT applications if you want to avoid development costs for your in-house solution.
- Storing and accessing unstructured data as time series
- Labeling data for annotation and filtering
- JSON-based query language for filtering data
- Data replication
- Real-time FIFO bucket quota based on size to avoid disk space shortage
- Read-only replicas for horizontal scaling of read operations
- Primary/Secondary mode for high availability
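The FIFO bucket quota can be pictured with a short sketch (conceptual Python, not the actual implementation): when a new record would push the bucket past its size quota, the oldest records are evicted first.

```python
from collections import deque


class FifoQuotaBucket:
    """Toy model of a size-based FIFO quota: oldest records are
    dropped when the total size would exceed the quota. This is an
    illustration only, not ReductStore's implementation."""

    def __init__(self, quota_size):
        self.quota_size = quota_size
        self.records = deque()  # (name, blob) in arrival order
        self.used = 0

    def write(self, name, blob):
        # Evict oldest records until the new one fits
        while self.records and self.used + len(blob) > self.quota_size:
            _, old = self.records.popleft()
            self.used -= len(old)
        self.records.append((name, blob))
        self.used += len(blob)


bucket = FifoQuotaBucket(quota_size=10)
bucket.write("a", b"xxxx")  # used: 4
bucket.write("b", b"xxxx")  # used: 8
bucket.write("c", b"xxxx")  # would exceed 10, so "a" is evicted
print([name for name, _ in bucket.records])  # ['b', 'c']
```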
The quickest way to get up and running is with our Docker image:
```
docker run -p 8383:8383 -v reduct-data:/data reduct/store:latest
```
If you prefer a bind mount instead of a Docker volume:
```
mkdir -p ./data
sudo chown -R 10001:10001 ./data
docker run -p 8383:8383 -v ${PWD}/data:/data reduct/store:latest
```

Alternatively, you can opt for Cargo:
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh # Install the latest Rust
apt install protobuf-compiler
cargo install reductstore
RS_DATA_PATH=./data reductstore
```
For a more in-depth guide, visit the Getting Started and Download sections.
After initializing the instance, dive in with one of our Client SDKs to write or retrieve data. To illustrate, here's a Python sample:
```python
from reduct import Client, BucketSettings, QuotaType


async def main():
    # 1. Create a ReductStore client
    async with Client("http://localhost:8383", api_token="my-token") as client:
        # 2. Get or create a bucket with a 1 GB quota
        bucket = await client.create_bucket(
            "my-bucket",
            BucketSettings(quota_type=QuotaType.FIFO, quota_size=1_000_000_000),
            exist_ok=True,
        )

        # 3. Write some data with timestamps and labels to two entries
        await bucket.write("/telemetry/sensor-1", b"<Blob data>",
                           timestamp="2024-01-01T10:00:00Z",
                           labels={"score": 10})
        await bucket.write("/telemetry/sensor-2", b"<Blob data>",
                           timestamp="2024-01-01T10:00:01Z",
                           labels={"score": 20})

        # 4. Query the data by time range and label condition
        async for record in bucket.query("/telemetry/*",
                                         start="2024-01-01T10:00:00Z",
                                         stop="2024-01-01T10:00:02Z",
                                         when={"&score": {"$gt": 15}}):
            print(f"Entry name: {record.entry}")
            print(f"Record timestamp: {record.timestamp}")
            print(f"Record size: {record.size}")
            print(await record.read_all())


# 5. Run the main function
if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
```

ReductStore is built with adaptability in mind. While it comes with a straightforward HTTP API that can be integrated into virtually any environment, we understand that not everyone wants to interact with the API directly. To streamline your development process and make integrations smoother, we've developed a series of client SDKs tailored for different programming languages and environments. These SDKs wrap around the core API, offering a more intuitive and language-native way to interact with ReductStore, thus accelerating your development cycle. Here are the client SDKs available:

- Python Client SDK
- JavaScript Client SDK
- Rust Client SDK
- C++ Client SDK
ReductStore is not just about data storage; it's about simplifying and enhancing your data management experience. Along with its robust core features, ReductStore offers a suite of tools to streamline administration, monitoring, and optimization. Here are the key tools you can leverage:
- CLI Client - a command-line interface for direct interactions with ReductStore
- Web Console - a web interface to administer a ReductStore instance
- ReductBridge - a data collector to get data from various sources and write it to ReductStore
Your input is invaluable to us! 🌟 If you've found a bug, have suggestions for improvements, or want to contribute directly to the codebase, here's how you can help:
- Questions and Ideas: Join our Discourse community to ask questions, share ideas, and collaborate with fellow ReductStore users.
- Bug Reports: Open an issue on our GitHub repository. Please provide as much detail as possible so we can address it effectively.
We believe in the power of community and collaboration. If you've built something amazing with ReductStore, we'd love to hear about it! Share your projects, experiences, and insights on our Discourse community.
If you find ReductStore beneficial, give us a ⭐ on our GitHub repository.
Your support fuels our passion and drives us to keep improving.
Together, let's redefine the future of blob data storage! 🚀