RaimaDB Replication and Hot Backup: An In-Depth Exploration
https://raima.com/raimadb-replication-and-hot-backup-an-in-depth-exploration/
Mon, 10 Nov 2025

Introduction

RaimaDB, an embedded database system from Raima Inc., is designed for high-performance applications across diverse platforms, including desktops, servers, embedded systems, and microcontrollers. It handles real-time data management in scenarios such as IoT devices, industrial automation, and edge computing. RaimaDB supports replication and hot backup mechanisms, which provide data redundancy, disaster recovery, and high availability without interrupting operations.

Replication in RaimaDB propagates data changes from a source database to one or more target databases, maintaining consistency across distributed systems. Hot backup creates live copies of the database while it remains operational, reducing downtime during maintenance or failover events. These features are based on RaimaDB’s storage architecture, which focuses on efficiency, cross-platform compatibility, and minimal I/O overhead.

This article covers RaimaDB’s asynchronous replication, the technologies enabling these features, and implementation approaches. It demonstrates setups using Docker containers, and details replication strategies using tools like rdm-replicate, rdm-hot-backup, and rdm-rsync. For further reading, refer to Raima’s documentation on updates by copy-on-write, snapshots, vacuuming, and replication and hot backup.

Understanding Asynchronous Replication in RaimaDB

RaimaDB supports asynchronous replication, where changes to the source database are sent to target replicas without requiring immediate synchronization. The source applies changes to its pack files, and replication or hot backup transfers the resulting pack-file extensions (the newly appended content) to the target. This approach improves performance by not blocking on remote acknowledgments, making it suitable for high-latency or intermittently connected environments.

Asynchronous replication prioritizes availability and throughput over strict real-time consistency between databases. Targets may lag behind the source, leading to eventual consistency, but RaimaDB’s transactional integrity ensures updates are applied atomically to avoid partial states. Benefits include lower latency for source operations, fault tolerance (source continues if targets are unavailable), and scalability for read-heavy workloads where targets offload queries.

Users must handle potential conflicts, such as divergent updates on targets, through application logic. Asynchronous mode fits backup, reporting, or geographically distributed systems, unlike synchronous replication where commits block until replicas confirm.
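Conceptually, asynchronous replication can be modeled as a source that commits locally and ships its transaction log later. The following toy sketch (plain Python, not RaimaDB code; all names are illustrative) shows how a target lags behind yet converges, with each transaction applied atomically:

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    """Toy target database: applies whole transactions atomically."""
    applied: list = field(default_factory=list)
    last_seq: int = 0

    def apply(self, txn):
        seq, changes = txn
        # Atomic: either the whole transaction lands or none of it.
        self.applied.extend(changes)
        self.last_seq = seq

class AsyncSource:
    """Toy source: commits locally, queues transactions for later shipping."""
    def __init__(self):
        self.log = []      # committed transactions as (seq, changes)
        self.seq = 0

    def commit(self, changes):
        self.seq += 1
        self.log.append((self.seq, changes))   # returns immediately: no remote ack

    def ship(self, replica, batch=1):
        """Ship pending transactions; the target lags until this runs."""
        for txn in self.log[replica.last_seq : replica.last_seq + batch]:
            replica.apply(txn)

src, tgt = AsyncSource(), Replica()
src.commit(["row1"]); src.commit(["row2"]); src.commit(["row3"])
src.ship(tgt, batch=2)
lag = src.seq - tgt.last_seq   # target lags by one transaction here
src.ship(tgt, batch=10)        # ...but is eventually consistent
```

The key property mirrored from the text: the source never blocks on the target, and the target never observes a partial transaction.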

Foundational Technologies in RaimaDB

RaimaDB’s replication and hot backup depend on a storage engine that maintains data integrity, efficiency, and compatibility. Key elements include copy-on-write (COW) updates, pack files, vacuuming, and platform-agnostic data encoding.


Copy-On-Write (COW) Updates

RaimaDB uses a full recursive COW mechanism for database modifications. When data is updated, RaimaDB creates new copies of affected structures (e.g., B-trees for keyed data and R-trees for spatial data) instead of changing originals. This proceeds bottom-up: new items are written first, followed by parent nodes up to the main B-tree root.

COW allows snapshot isolation without read locks, supporting concurrent readers and writers. It avoids in-place updates, removing free-space management overhead and partial page writes. Benefits include:

  • – Space Efficiency: COW enables trivial support for variable-length records, whereas in-place updates make variable-length records non-trivial to implement without fragmentation.
  • – Performance Gains: Parallel transactions are supported on independent tables.
  • – Rapid Recovery: There is no difference between a crash and a normal shutdown. When reopening the database, the only task is to locate the last commit mark. Recovery markers every 64 KiB reduce the amount of scanning needed. A crash may lose only the last in-progress transactions, since the append-only design means only content at the end of files can be affected.
  • – Durability Flexibility: Sync operations apply only to the latest pack file, lowering I/O.

 

Snapshots, part of COW, reference the current B-tree root for isolated views, enabling multiversion concurrency control (MVCC) without extra structures. This supports hot backups by permitting consistent point-in-time copies during live operations.
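The path-copying idea behind COW can be illustrated with a small persistent tree (an illustrative Python sketch, not RaimaDB's actual B-tree implementation): an update clones only the nodes on its search path, so the old root remains a consistent snapshot and untouched subtrees are shared between versions:

```python
class Node:
    __slots__ = ("key", "value", "left", "right")
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def insert(root, key, value):
    """Copy-on-write insert: clone only the nodes on the search path,
    bottom-up; everything off the path is shared with the old tree."""
    if root is None:
        return Node(key, value)
    if key < root.key:
        return Node(root.key, root.value, insert(root.left, key, value), root.right)
    if key > root.key:
        return Node(root.key, root.value, root.left, insert(root.right, key, value))
    return Node(key, value, root.left, root.right)   # overwrite lands in a new node

def lookup(root, key):
    while root is not None:
        if key == root.key:
            return root.value
        root = root.left if key < root.key else root.right
    return None

v1 = insert(insert(insert(None, 2, "b"), 1, "a"), 3, "c")
v2 = insert(v1, 2, "B")   # a new root is a new snapshot
# Readers holding v1 still see the old value; v2 sees the new one,
# and the unmodified right subtree is physically shared.
```

This is exactly why readers need no locks: a reader pins a root and sees a frozen, consistent view no matter what writers do afterwards.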

Pack Files and Vacuuming

RaimaDB structures data into pack files, append-only files with a maximum size slightly under 2 GiB. Updates append to the end of the current pack file, referencing prior files without mid-file changes. This ensures sequential I/O, improving performance on SSDs and mechanical drives.
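The append-and-roll behavior can be sketched as follows (a toy Python model with a tiny size limit standing in for the just-under-2-GiB cap; the naming mirrors the p00000001.pack files seen later in the transcripts):

```python
import os
import tempfile

PACK_LIMIT = 64          # bytes here; real pack files roll just under 2 GiB

def pack_name(n):
    return f"p{n:08d}.pack"      # p00000001.pack, p00000002.pack, ...

def append_record(dbdir, payload):
    """Append to the newest pack file; start a new pack when the current
    one would exceed the size limit. Existing packs are never modified,
    so all writes are sequential appends."""
    existing = sorted(f for f in os.listdir(dbdir) if f.endswith(".pack"))
    current = existing[-1] if existing else None
    if current is None or os.path.getsize(os.path.join(dbdir, current)) + len(payload) > PACK_LIMIT:
        n = int(current[1:9]) + 1 if current else 1
        current = pack_name(n)   # roll to a fresh append-only file
    with open(os.path.join(dbdir, current), "ab") as f:
        f.write(payload)

dbdir = tempfile.mkdtemp()
for i in range(10):
    append_record(dbdir, b"x" * 20)
packs = sorted(os.listdir(dbdir))   # ['p00000001.pack', ..., 'p00000004.pack']
```

Because older packs are immutable, a backup tool only ever needs to copy whole new files plus the growing tail of the newest one.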

Stale data builds up over time, managed by vacuuming—a garbage collection process that traverses structures, moves active content to the latest pack file, and updates references. Vacuuming processes the oldest pack file first: if unreferenced, it is deleted; otherwise, data is clustered by key for better access. It runs without write locks on readers but briefly pauses updates to the vacuumed table.

Benefits of vacuuming include:

  • – Storage Optimization: Lowers disk footprint by consolidating and deleting obsolete files.
  • – Device Longevity: Reduces SSD wear through efficient garbage collection.
  • – I/O Efficiency: Clustered data boosts cache hit rates and sequential reads.
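A single vacuum pass can be modeled roughly like this (an illustrative Python sketch, not RaimaDB's implementation; liveness tracking is reduced to a simple key set, and a pack is a list of records):

```python
def vacuum_oldest(records, live_keys):
    """Toy vacuum pass: visit the oldest pack file, re-append its
    still-live records to the newest pack (clustering them together),
    then delete the oldest pack. `records` maps pack name -> list of
    (key, payload); `live_keys` holds keys whose current copy lives
    in the oldest pack."""
    packs = sorted(records)
    oldest, newest = packs[0], packs[-1]
    if oldest == newest:
        return records                     # nothing to reclaim yet
    survivors = [(k, p) for k, p in records[oldest] if k in live_keys]
    records[newest] = records[newest] + survivors   # moved to the tail
    del records[oldest]                    # obsolete file is removed
    return records

packs = {
    "p00000001.pack": [(1, b"a"), (2, b"old")],   # key 2 was later rewritten
    "p00000002.pack": [(2, b"new"), (3, b"c")],
}
packs = vacuum_oldest(packs, live_keys={1})
# pack 1 is gone; key 1's record now lives at the end of pack 2,
# and the stale copy of key 2 was simply dropped.
```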

Pack file limits need consideration for replication: targets on systems with smaller file size constraints may fail if source packs exceed 2 GiB, but the reverse poses no issue.

Low-Level Data Formats for Cross-Platform Compatibility

RaimaDB’s on-disk format supports replication across platforms like Windows, macOS, Linux, RTOS-based embedded systems, and bare-metal microcontrollers. Data types are encoded in a platform-agnostic way:

  • – Integers: Variable-length encoding for endian-independent storage; fixed-size cases use network byte order (big-endian).
  • – Floating-Point Numbers: IEEE-754 format in network byte order, except on microcontrollers with limited FPU support, where custom handling applies.
  • – Strings: UTF-8 encoding internally and on-disk; Windows CP1252 is API-optional but converted to UTF-8 for storage.
  • – Binary Data: Byte-aligned units, with no endian or alignment issues.
  • – Encryption: Uniform algorithms across platforms ensure decryptable replicas.
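To see why variable-length integer encoding sidesteps endianness, consider an LEB128-style varint (an illustrative scheme; RaimaDB's exact on-disk encoding is not documented here). The encoded byte stream is defined arithmetically, so it is identical regardless of the machine's native byte order:

```python
def encode_varint(n):
    """Unsigned LEB128-style encoding: 7 payload bits per byte, with the
    high bit set on every byte except the last. Small values cost one
    byte; the stream is byte-order independent by construction."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)   # continuation bit: more bytes follow
        else:
            out.append(b)
            return bytes(out)

def decode_varint(data):
    """Decode one varint; returns (value, bytes_consumed)."""
    n = shift = 0
    for i, b in enumerate(data):
        n |= (b & 0x7F) << shift
        if not b & 0x80:
            return n, i + 1
        shift += 7
    raise ValueError("truncated varint")

# Round-trip across a range of magnitudes.
for v in (0, 1, 127, 128, 300, 2**32):
    enc = encode_varint(v)
    assert decode_varint(enc) == (v, len(enc))
```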

Locale-dependent string sorting (collation) requires care: indexes on collated strings risk corruption across platform versions or locale updates. Avoid collated indexes for replicated databases to prevent inconsistencies.
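A small illustration of the risk (Python, with str.casefold standing in for a locale collation): the same keys sort differently under plain byte/code-point order and under a collation, so an index written with one ordering cannot be searched correctly under the other:

```python
words = ["Zebra", "apple", "Mango"]

bytewise = sorted(words)                    # code-point order: stable on every platform
collated = sorted(words, key=str.casefold)  # stand-in for a locale-aware collation

# The two orders disagree. If the writer's platform built the index with
# one ordering and the reader's platform searches it with the other,
# B-tree lookups miss existing keys: the index is effectively corrupt.
```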

As long as IEEE-754 is supported and no locale-dependent collation is involved, any target platform can receive data from any source, producing byte-for-byte identical replicas.

Setting Up with Docker

To demonstrate replication and hot backup, use Docker for reproducible environments. Raima provides an official image on Docker Hub: raimadb/rdm. Pull and run it as follows:

				
					$ docker pull raimadb/rdm
Using default tag: latest
latest: Pulling from raimadb/rdm
Digest: sha256:343e56b54ecdf0214b80cbaf57fa72edfc7f8be28b5ed5c236754ff5296645c5
Status: Image is up to date for raimadb/rdm:latest
docker.io/raimadb/rdm:latest
$ docker run --hostname rdm -it raimadb/rdm
Welcome to the RDM License Request and Shell script!
Please fill out the following form to request a license for running RDM
in a container. If you have filled out this form within the past 90 days,
you can skip this by hitting enter when asked for the full name.
Enter your Full Name (leave blank to skip): John Doe
…
Form submitted successfully! You have a license for 90 days.
Please note that the RDM product may have a time-bomb that will expire
after a certain date. Please also note that this is not the same as a license
expiration. Make sure you have registered for a license as previously
instructed.
Checking whether a time-bomb has been triggered by executing rdm-compile...
The time-bomb has not been triggered!
This product expire on: 2026-05-05
Point your browser at:
    http://172.17.0.2
to view the documentation
Dropping you into a bash shell...
ubuntu@rdm:~/GettingStarted$ cd c-core/core20Example/
ubuntu@rdm:~/GettingStarted/c-core/core20Example$ mkdir build
ubuntu@rdm:~/GettingStarted/c-core/core20Example$ cd build
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ cmake ..
-- The C compiler identification is GNU 13.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Found RDM in: /opt/Raima/rdm_enterprise_enc_simple-16.1.0.808/lib/cmake/RDM
-- Configuring done (0.2s)
-- Generating done (0.0s)
-- Build files have been written to: /home/ubuntu/GettingStarted/c-core/core20Example/build
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ make
[ 20%] Compile and embed schema for core20.sdl for C struct definition API
[ 40%] Building C object CMakeFiles/core20Example.dir/core20Example_main.c.o
[ 60%] Building C object CMakeFiles/core20Example.dir/example_fcns.c.o
[ 80%] Building C object CMakeFiles/core20Example.dir/core20_cat.c.o
[100%] Linking C executable core20Example
[100%] Built target core20Example
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ ./core20Example -h
Usage:
  core20Example [OPTION]...
Example that demonstrates server-side, client-side, and embedded TFS continuously inserting and/or reading rows
Short options can be combined into one string starting with a single '-'.
Mandatory arguments to long options are mandatory for short options too.
Long option arguments can also be specified in a separate argument.
  -h, --help Display this usage information
  -n, --no-fln-output Don't print file and line numbers for output messages
  -s, --server Use the server-side TFS
  -c, --client Use the client-side TFS
  -r, --reader-only Read rows
  -w, --writer-only Write rows
  -m, --sleep-ms=MSECS
                             Sleep MSECS milliseconds between iterations
  -i, --iterations=COUNT Run COUNT iterations
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$ ./core20Example --iterations=20 --sleep-millisecs 10
Read: 19 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:134
ubuntu@rdm:~/GettingStarted/c-core/core20Example/build$
				
			

When running, the container prompts for license details (skippable if recent). It shows a URL (e.g., http://172.17.0.2) for browser-based documentation access. Each example, like core20Example, has dedicated pages. The container allows VS Code attachment for development, detailed in the main documentation (not covered here).

The core20Example is a simple application showing continuous row insertion and reading via RaimaDB’s Transactional File Server (TFS). It is agnostic to data organization, ideal for illustrating replication and hot backup without complex application logic.

For multi-container scenarios, build a custom image extending raimadb/rdm. This includes pre-built tools (rdm-hot-backup, core20Example, rdm-rsync) and SSH/rsync for inter-container communication.


Custom Dockerfile Explanation

The Dockerfile creates a hybrid build-and-run environment for demonstrating the approaches that follow:

				
					FROM raimadb/rdm
# Build rdm-hot-backup as ubuntu (configure and build steps, without preset)
RUN cd /home/ubuntu/GettingStarted/rdm-hot-backup && \
    cmake -S . -B build -DCMAKE_BUILD_TYPE=Release && \
    cmake --build build && \
    cd /home/ubuntu/GettingStarted/c-core/core20Example && \
    cmake -S . -B build -DCMAKE_BUILD_TYPE=Release && \
    cmake --build build
USER root
# Install required packages
RUN apt-get update && \
    apt-get install -y openssh-server rsync inotify-tools
# Set up sudo for ubuntu user to start sshd and generate keys without password
RUN echo "ubuntu ALL=(ALL) NOPASSWD: /usr/sbin/sshd, /usr/bin/ssh-keygen" > /etc/sudoers.d/ubuntu-sshd && \
    chmod 0440 /etc/sudoers.d/ubuntu-sshd
# Set up SSH server runtime directory
RUN mkdir /var/run/sshd
# Create private/public key pair for ubuntu user
RUN mkdir -p /home/ubuntu/.ssh && \
    ssh-keygen -t rsa -b 2048 -f /home/ubuntu/.ssh/id_rsa -N "" && \
    cat /home/ubuntu/.ssh/id_rsa.pub >> /home/ubuntu/.ssh/authorized_keys && \
    chmod 700 /home/ubuntu/.ssh && \
    chmod 600 /home/ubuntu/.ssh/authorized_keys && \
    chmod 600 /home/ubuntu/.ssh/id_rsa && \
    chown -R ubuntu:ubuntu /home/ubuntu/.ssh
# Configure SSH client for passwordless, promptless connections (disable host key checking)
RUN echo "Host *" > /home/ubuntu/.ssh/config && \
    echo " StrictHostKeyChecking no" >> /home/ubuntu/.ssh/config && \
    echo " UserKnownHostsFile /dev/null" >> /home/ubuntu/.ssh/config && \
    echo " LogLevel ERROR" >> /home/ubuntu/.ssh/config && \
    chown ubuntu:ubuntu /home/ubuntu/.ssh/config && \
    chmod 600 /home/ubuntu/.ssh/config
# Copy rdm-rsync, rdm-hot-backup, and core20Example binaries to Raima installation
RUN RDM_DIR=$(ls /opt/Raima | grep -E -- '-[1-9][0-9]\.[0-9]$') && \
    cp /home/ubuntu/GettingStarted/rdm-rsync/rdm-rsync /opt/Raima/"$RDM_DIR"/bin/rdm-rsync && \
    chmod +x /opt/Raima/"$RDM_DIR"/bin/rdm-rsync && \
    cp /home/ubuntu/GettingStarted/rdm-hot-backup/build/rdm-hot-backup /opt/Raima/"$RDM_DIR"/bin/. && \
    cp /home/ubuntu/GettingStarted/c-core/core20Example/build/core20Example /opt/Raima/"$RDM_DIR"/bin/.
# Create startup script to generate host keys, start sshd (with sudo), and then the original command from the correct working directory
RUN echo '#!/bin/bash' > /usr/local/bin/start_sshd_and_shell.sh && \
    echo 'sudo ssh-keygen -A' >> /usr/local/bin/start_sshd_and_shell.sh && \
    echo 'sudo /usr/sbin/sshd -D &' >> /usr/local/bin/start_sshd_and_shell.sh && \
    echo 'cd /home/ubuntu/GettingStarted' >> /usr/local/bin/start_sshd_and_shell.sh && \
    echo '/usr/local/bin/request_license_and_shell.pl' >> /usr/local/bin/start_sshd_and_shell.sh
RUN chmod +x /usr/local/bin/start_sshd_and_shell.sh
# Switch back to ubuntu user and set working directory to match base image
USER ubuntu
WORKDIR /home/ubuntu/GettingStarted
# Set the new CMD
CMD ["/usr/local/bin/start_sshd_and_shell.sh"]
				
			

Explanation:

  • – Base Image: Extends raimadb/rdm with pre-built examples (rdm-hot-backup, core20Example).
  • – Dependencies: Installs openssh-server, rsync, and inotify-tools for secure inter-container data transfer and file monitoring.
  • – SSH Setup: Configures passwordless SSH with key generation, sudo privileges for sshd, and strict host checking disabled for demo simplicity.
  • – Binary Placement: Copies tools to RaimaDB’s bin directory for PATH access.
  • – Startup Script: Generates host keys, starts SSH daemon in background, and invokes the license/shell script.
  • – User and Workdir: Reverts to the ubuntu user and sets the working directory for consistency.

 

Build the image:

				
					$ docker pull raimadb/rdm
…
$ docker build -t my-rdm my-rdm
…
				
			

Note: This image combines build and runtime for educational purposes. In production, separate build stages from runtime to minimize image size and security risks, especially for custom applications.

Approaches to Replication and Hot Backup

All approaches rely on the Docker image built above and produce byte-identical target replicas. Some allow live reads on targets during replication (hot replicas), while others create offline backups for failover. Linux is fully supported; examples use the custom my-rdm image.


Approach 1: Low-Level RaimaDB API

The base method uses RaimaDB’s replication API to extract changes from a source and apply them to a target. The API can handle data serialization and transport, but applications can also manage transport themselves, typically for protocols RaimaDB does not support. This approach fits embedded systems without standard networking, as well as custom integrations.

For convenience, Raima provides rdm-replicate, a command-line wrapper around this API, included in enterprise packages. It manages TFS connections for source and target, enabling cross-process/machine replication without custom code. Use cases include initial seeding or ongoing replication where read access to the target is required. Details are skipped here; see subsequent approaches for practical usage.


Approach 2: Using rdm-replicate Command-Line Tool

rdm-replicate enables asynchronous replication via TFS. The following demonstrates it with core20Example running in two containers.


In the listings that follow, each block is labeled as running in either the source container or the target container.

Start source container:

				
					$ docker run --hostname source --name source --rm -it my-rdm
Welcome to the RDM License Request and Shell script!
…
    http://172.17.0.2
…
ubuntu@source:~/GettingStarted$ cd ..
ubuntu@source:~$ core20Example --iterations 1
Read: 0 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:148
ubuntu@source:~$
				
			

Start target container (separate terminal):

				
					$ docker run --hostname target --name target --rm -it my-rdm
Welcome to the RDM License Request and Shell script!
…
    http://172.17.0.3
…
ubuntu@target:~/GettingStarted$ cd ..
				
			

In source:

				
					ubuntu@source:~$ rdm-rsync --copy --force core20 172.17.0.3:core20
Master SSH connection is down. Attempting to restart...
Master restarted successfully.
sending incremental file list
./
p00000001.pack
sent 820 bytes received 38 bytes 1,716.00 bytes/sec
total size is 1,440 speedup is 1.68
ubuntu@source:~$ core20Example --server --writer-only
Inserted: 27 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:212
				
			

In target:

				
					ubuntu@target:~$ core20Example --server --reader-only
Read: 0 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:148


				
			

In source (third terminal):

				
					$ docker exec -it source bash
ubuntu@source:~/GettingStarted$ rdm-replicate tfs://localhost/core20 tfs://172.17.0.3/core20
Database Replicate Utility
RaimaDB 16.1.0 Build 812 [11-7-2025] https://www.raima.com/
Copyright (c) 2024 Raima Inc., All rights reserved.
*** EVALUATION COPY ONLY (not for release) Contact sales@raima.com. ***
				
			

Updates from source propagate to target, observable in application output. Targets support live reads.

Variation: Standalone TFS

Replace the built-in TFS with the standalone rdm-tfs in both containers. Start rdm-tfs before core20Example --client. The --client option must be used instead of --server; it makes core20Example connect to the standalone TFS rather than use a built-in TFS (which can also accept remote connections). Using a built-in TFS eliminates the need for transport between core20Example and the TFS. However, with a built-in TFS, the database is unavailable to other clients (such as rdm-replicate) when the example is not running. This creates a trade-off between performance and uptime.

No initial rdm-rsync is needed if TFS initializes empty databases. Replication order: start TFS on source and target, start writer on source, run rdm-replicate, then start reader on target.

Approach 3: Using rdm-hot-backup for Hot Backup

rdm-hot-backup supports live backups by monitoring file system events (via Linux inotify) and mirroring changes to a target directory. It first copies the source, then applies appends, creations, and deletions in pack file order.

With this approach, you may need to change some system settings to get inotify to work properly, especially if other software (e.g., VS Code with large projects) is consuming resources. Increase limits as follows on the host where Docker is being run:

				
					$ sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
fs.inotify.max_user_watches = 65536
fs.inotify.max_user_instances = 128
$ sudo sysctl -w fs.inotify.max_user_watches=524288
fs.inotify.max_user_watches = 524288
$ sudo sysctl -w fs.inotify.max_user_instances=1024
fs.inotify.max_user_instances = 1024
$ echo "fs.inotify.max_user_watches=524288" | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_watches=524288
$ echo "fs.inotify.max_user_instances=1024" | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_instances=1024
$ sudo sysctl -p
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 1024
				
			

Start a single container and first issue these commands:

				
					$ docker run --hostname single --name single --rm -it my-rdm
...
    http://172.17.0.2
...
ubuntu@single:~/GettingStarted$ cd ..
ubuntu@single:~$ core20Example --writer-only
Inserted: 133 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c:212
				
			

In another terminal:

				
					$ docker exec -it single bash
ubuntu@single:~/GettingStarted$ cd ..
ubuntu@single:~$ rdm-hot-backup core20 core20-copy
^C
ubuntu@single:~$ ls -l core20.rdm core20-copy.rdm
core20-copy.rdm:
total 36
-rw-r--r-- 1 ubuntu ubuntu 33536 Nov 7 17:31 p00000001.pack
core20.rdm:
total 36
-rw-r--r-- 1 ubuntu ubuntu 33936 Nov 7 17:31 p00000001.pack
ubuntu@single:~$ rdm-hot-backup core20 core20-copy
^C
ubuntu@single:~$ ls -l core20.rdm core20-copy.rdm
core20-copy.rdm:
total 40
-rw-r--r-- 1 ubuntu ubuntu 40384 Nov 7 17:32 p00000001.pack
core20.rdm:
total 44
-rw-r--r-- 1 ubuntu ubuntu 40784 Nov 7 17:32 p00000001.pack
				
			

The above shows that the hot backup can be interrupted at any time, and progress can be observed with the ls command line tool.

Requirements:

  • – Source on local filesystem (no NFS).
  • – Tool must match RaimaDB’s pace to avoid inconsistencies.
  • – Pack files copied ascending; appends mirrored exactly.
  • – Sufficient kernel memory for inotify watches (see above limits).


Advantages: Simple, efficient (blocks on idle), low engine interference. Disadvantages: Target unusable during backup. Transactions that have not been completely copied will be ignored.

Integrates with COW by respecting pack file finalization. Resumable on restart with pack overlap.
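The pack-file rules above can be captured in a toy mirroring pass (an illustrative Python sketch; rdm-hot-backup itself reacts to inotify events and additionally handles pack creation, deletion, and commit-mark consistency):

```python
import os
import shutil
import tempfile

def mirror_packs(src, dst):
    """One pass of a minimal hot-backup mirror: copy pack files in
    ascending order; for a pack that already exists in the target,
    copy only the appended tail. Packs are append-only, so bytes the
    target already holds never change."""
    os.makedirs(dst, exist_ok=True)
    for name in sorted(f for f in os.listdir(src) if f.endswith(".pack")):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if not os.path.exists(d):
            shutil.copyfile(s, d)       # whole new pack
            continue
        have = os.path.getsize(d)
        with open(s, "rb") as fs, open(d, "ab") as fd:
            fs.seek(have)
            fd.write(fs.read())          # mirror only the append

src, dst = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(src, "p00000001.pack"), "wb") as f:
    f.write(b"AAAA")
mirror_packs(src, dst)                   # initial full copy
with open(os.path.join(src, "p00000001.pack"), "ab") as f:
    f.write(b"BB")                       # source keeps appending
mirror_packs(src, dst)                   # incremental pass copies the tail
```

Repeating the pass is naturally resumable: because the target's existing bytes are always valid, restarting simply re-copies from wherever the mirror left off.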

Approach 4: Custom File System Monitoring or rsync-Based Solutions

For non-Linux or advanced needs, develop custom tools using OS-specific APIs (e.g., Windows ReadDirectoryChangesW). Follow pack file rules: ascending copies, synchronized appends/creations.

Alternatively, use rdm-rsync (included in the custom image) for initial/seeded copies, combined with periodic snapshots. This suits offline backups or hybrid setups, ensuring byte-identical targets for failover. Not live-readable during sync but efficient for periodic replication.

rdm-rsync is a script that uses rsync with file watches (via inotify) to monitor changes and perform incremental syncs.

Example:

Start source container:

				
					$ docker run --hostname source --name source --rm -it my-rdm
...
    http://172.17.0.2
...
ubuntu@source:~/GettingStarted$ cd ..
ubuntu@source:~$ core20Example --writer-only
Inserted: 73 at /home/ubuntu/GettingStarted/c-core/core20Example/core20Example_main.c
				
			

In a second terminal:

				
					$ docker run --hostname target --name target --rm -it my-rdm
...
    http://172.17.0.3
...
ubuntu@target:~/GettingStarted$ cd ..
ubuntu@target:~$ ls -l core20.rdm
ls: cannot access 'core20.rdm': No such file or directory
ubuntu@target:~$
				
			

Since no copying has started, the ls command shows an error. Repeat later to observe content being copied.

In a third terminal, exec into source:

				
					$ docker exec -it source bash
ubuntu@source:~/GettingStarted$ cd ..
ubuntu@source:~$ rdm-rsync core20 172.17.0.3:core20
Master SSH connection is down. Attempting to restart...
Master restarted successfully.
Setting up watches.
Watches established.
sending incremental file list
./
p00000001.pack
sent 2,994 bytes received 38 bytes 6,064.00 bytes/sec
total size is 17,344 speedup is 5.72
sending incremental file list
p00000001.pack
sent 311 bytes received 35 bytes 692.00 bytes/sec
total size is 17,760 speedup is 51.33
sending incremental file list
p00000001.pack
sent 318 bytes received 35 bytes 706.00 bytes/sec
total size is 18,224 speedup is 51.63
sending incremental file list
p00000001.pack
...
				
			

In summary, RaimaDB’s approaches address diverse requirements, from API-driven custom integrations to turnkey tools, all using its core architecture for reliable, cross-platform data management.

Why Do Tech CEOs Need RaimaDB?
https://raima.com/why-do-tech-ceos-need-raimadb/
Tue, 02 Sep 2025

1. Why do we need RaimaDB now?

Tech CEOs are navigating unprecedented complexity: the explosion of data at the edge, strict regulatory demands, and the race to deliver AI- and IoT-powered innovation without compromising speed, reliability, or safety.

RaimaDB is the only embedded database designed to thrive in these conditions—lightweight, high-performance, certified-ready, and proven in mission-critical industries like aerospace, industrial automation, and medical devices.

Here are the key reasons why RaimaDB is essential for CEOs now:

  • Accelerate Innovation: RaimaDB enables rapid prototyping and deployment of next-generation products by combining SQL standards with in-memory performance and flexible APIs (C/C++, Python, .NET, Java).

  • Strategic Edge Advantage: With built-in support for edge and real-time environments, RaimaDB helps companies capture, analyze, and act on data where it’s generated—reducing latency and cloud costs.

  • Risk Mitigation: Certified for safety-critical environments (e.g., IEC 61508, IEC 62304, IEC 60880, DO-178C), RaimaDB reduces compliance risk and accelerates regulatory approval.

  • Operational Efficiency: Its zero-administration design and tiny footprint mean reduced infrastructure costs and fewer resources spent managing databases—freeing your teams to focus on product innovation.

  • Market Differentiation: Tech leaders using RaimaDB can promise customers faster, safer, and more reliable experiences, securing long-term competitive advantage.

  • Future-Proof Success: With over 40 years of innovation and ongoing certification efforts, RaimaDB is built to support evolving technology landscapes, from AI at the edge to cloud-native deployments.

2. What business impact does RaimaDB deliver?

RaimaDB creates measurable impact across multiple CEO priorities:

  • Improves Efficiencies: Reduces operational overhead with a zero-admin, embedded architecture. Enables engineering teams to build faster without expanding headcount.

  • Increases Revenue: Shortens time-to-market for new data-driven features, enhances customer satisfaction, and supports premium product differentiation in regulated industries.

  • Mission Critical: Powers decisions in real time—whether in autonomous vehicles, industrial robots, or medical devices—where downtime or errors are unacceptable.

3. Is the investment justified?

While RaimaDB may not be in the initial budget, it offsets costs across multiple areas:

  • Cuts infrastructure and cloud expenses by processing data locally.

  • Replaces the need for heavy, complex database admin headcount.

  • Reduces time and expense in certification processes by leveraging RaimaDB’s safety-critical readiness.

The ROI comes from faster innovation cycles, lower cost-to-serve, and stronger positioning in mission-critical markets.

4. How does RaimaDB align with CEO objectives?

For companies under $5M revenue:

  • Build repeatable customer acquisition by differentiating with fast, reliable data features.

  • Increase product traction by leveraging edge-ready and certifiable technology.

  • Access capital more effectively by showcasing compliance readiness and market differentiation.

For companies over $5M revenue:

  • Amplify CEO leadership with a product foundation trusted in aerospace, medical, and defense.

  • Advance product and services strategy with a database that scales from embedded devices to enterprise systems.

  • Monetize emerging tech like IoT, AI, and robotics by delivering safe, real-time data-driven services.

5. How is RaimaDB different from alternatives?

While other databases offer partial solutions (open-source embedded engines, heavy enterprise RDBMS, or cloud-only services), RaimaDB uniquely combines:

  • Embedded footprint (runs in <1MB memory).

  • High performance (in-memory, real-time transactions).

  • Full SQL & ACID compliance (standards-based reliability).

  • Certification-ready design for safety-critical industries.

  • Proven heritage with 25,000+ deployments worldwide.

6. Implementation: What are the requirements?

Minimal. RaimaDB is designed for straightforward integration by engineering teams, with:

  • Native APIs across multiple languages.

  • Drop-in deployment on diverse hardware and OS platforms.

  • Scalable support model from prototype to production.

Most customers start with evaluation and integration in days, not months—accelerating time-to-market without overwhelming internal teams.

👉 The right option, for the short term and the long term:

  • Do-it-yourself: Time-intensive, lacks certifications, inconsistent performance.

  • Outsourced solutions: High cost, lack of flexibility, certification hurdles.

  • RaimaDB: Certified-ready, embedded, efficient, and future-proof.

RaimaDB: The Versatile Embedded, In-Memory, and Real-Time Database for IoT and Sensor Data
https://raima.com/raimadb-the-versatile-embedded-in-memory-and-real-time-database-for-iot-and-sensor-data/
Wed, 23 Oct 2024

In the world of databases, versatility is key, and RaimaDB exemplifies this perfectly. RaimaDB isn’t just an embedded database or an in-memory system; it’s also a distributed, real-time, highly available database optimized for IoT, sensor data, and much more. With a broad range of capabilities, RaimaDB adapts to various database types, from SQL-compliant relational models to highly scalable distributed systems.

In this article, we’ll explore the numerous database categories RaimaDB identifies with and examine the specific features that allow it to excel in each. Whether you need a transaction-processing OLTP system, a fault-tolerant database, or a real-time analytics solution, RaimaDB demonstrates that being multifaceted doesn’t mean sacrificing performance or specialization. Join us as we uncover how one database can fit so many roles in the modern data landscape.

 

Relational Database Management System (RDBMS): RaimaDB supports ANSI SQL-2012 and provides ACID-compliant transactions, ensuring data integrity and consistency. With features like R-tree indexing and SQL triggers, RaimaDB efficiently models relational data, storing it in tables while managing relationships between data points, ideal for embedded environments

Time Series Database (TSDB): RaimaDB handles time-stamped data with its time series capabilities, optimized for IoT sensor data. Its real-time data processing and efficient in-memory operations make it suitable for continuous time-based querying and storage, allowing applications to monitor sensor readings and logs in real-time

Embedded Database: Specifically designed for embedded systems, RaimaDB has a small footprint and minimal resource requirements. It integrates directly into applications, operating in-process with low memory and CPU usage. This allows it to run efficiently on resource-constrained devices, such as IoT sensors, industrial systems, and automotive ECUs

Network Database: RaimaDB incorporates a network model with direct relationships (owner/member relationships) through “set” structures, allowing data to be organized hierarchically in a non-relational format. This hybrid model combines relational and network features, making it flexible for embedded applications with complex data relationships

Multimodel Database: RaimaDB supports both relational and network models, offering flexibility in data representation. Its ability to automatically switch between these models allows it to manage data in different formats within the same application, ideal for diverse use cases in industrial and IoT environments

In-Memory Database: RaimaDB offers the option to run fully in-memory, significantly reducing latency for real-time applications. It supports hybrid configurations where in-memory and disk-based storage are used together, ensuring both speed and durability in applications requiring fast access and persistent storage

Distributed Database: With replication, synchronization and unions across distributed nodes, RaimaDB ensures that data is consistent and available simultaneously across multiple locations. This feature makes it ideal for distributed IoT networks and industrial systems where real-time data sharing is critical

Cloud Database: While primarily an embedded solution, RaimaDB can be deployed in cloud environments for lightweight, distributed applications. Its ability to handle data processing across different nodes makes it an adaptable database for cloud-based IoT deployments

Real-Time Database: RaimaDB provides deterministic response times, essential for real-time systems. It supports ACID-compliant transactions and efficient in-memory processing, making it suitable for mission-critical applications like industrial automation where timing is crucial

Spatial Database: While RaimaDB is not primarily designed for spatial data, its flexibility in managing sensor inputs and location-based data allows it to support spatial queries in IoT and location-driven applications, where geographic information is key

Mobile Database: RaimaDB’s small footprint and low memory usage make it a strong choice for mobile applications. It is optimized for mobile environments with limited resources, enabling efficient local storage and processing in mobile devices and applications

Edge Database: RaimaDB is ideal for edge computing, where data is processed locally at the edge of the network, close to the data source. Its low-latency data processing and real-time capabilities ensure that edge devices can make quick decisions without relying on centralized cloud infrastructure

Persistent Memory Database: RaimaDB supports persistent storage modes that ensure data durability even in the event of a power failure. It can preload in-memory data from disk, ensuring that important data is retained across sessions, and write it back out to disk on demand when the user is done

Transaction Processing Database (OLTP): RaimaDB’s support for ACID transactions, multi-version concurrency control (MVCC), and efficient transaction handling make it ideal for OLTP systems. It ensures data integrity and reliability during real-time transaction processing, making it suitable for embedded environments

Real-Time Analytics Database: RaimaDB’s ability to process sensor and time-series data in real time makes it highly suited for real-time analytics in IoT environments. Its in-memory processing and time-based querying allow continuous data analysis, crucial for applications that require immediate insights from sensor data

SQL Database: RaimaDB supports ANSI SQL-2012, with full SQL capabilities, including stored procedures and triggers. It also provides extensions such as SQL/PL and native APIs, making it a versatile SQL database for embedded applications

Enterprise Database: RaimaDB’s ACID compliance, data replication, and fault tolerance make it suitable for smaller enterprise use cases, especially in decentralized, embedded, or industrial systems that require reliable, scalable data management

Sensor Data Database: RaimaDB is optimized for managing high volumes of sensor data in real time. It processes IoT data streams efficiently, providing rapid read/write access and supporting time-series storage, ideal for sensor-heavy industrial and IoT applications

Highly Available Database: RaimaDB supports replication and synchronization to ensure high availability, making it resilient to failures. Its ability to replicate data across nodes guarantees uptime and data consistency, which is critical for mission-critical applications

Write-Optimized Database: RaimaDB’s multi-version concurrency control (MVCC) and efficient write-handling capabilities make it optimized for high-frequency write operations. This feature is key for real-time systems that require constant data updates, such as IoT devices and logging systems

Read-Optimized Database: With B+ tree, R-tree and AVL tree indexing, RaimaDB ensures fast data retrieval, even in resource-constrained environments. This read optimization makes it ideal for applications where querying speed is crucial, especially in industrial systems

Operational Database: RaimaDB’s real-time transaction processing, efficient data management, and concurrency control make it suitable for operational workloads in embedded and industrial applications, ensuring fast and reliable access to operational data

On-Premises Database: RaimaDB can be deployed on-premises, providing local data storage and processing without relying on external connectivity. This makes it suitable for industrial and IoT environments where real-time, on-site data access is essential

File-Based Database: RaimaDB stores data using a compact, file-based storage model. It includes options for further LZMA compression, ensuring that disk usage is minimized while maintaining efficient access, making it ideal for embedded systems

IoT (Internet of Things) Database: RaimaDB is built for IoT applications, with features like real-time data handling, time-series storage, and distributed processing. It allows for efficient data management and processing in IoT networks, handling large volumes of sensor data with low latency

Fault-Tolerant Database: RaimaDB’s built-in replication and recovery mechanisms ensure fault tolerance, allowing it to continue operating even during hardware or network failures. This makes it suitable for critical systems where uptime is essential

Mobile Application Database: RaimaDB’s small footprint, low resource usage, and ability to run in-process make it well-suited for mobile applications, where efficient data management on constrained devices is required

Encrypted Database: RaimaDB provides database and table level AES encryption (128, 192, 256-bit) for secure data storage and communication, making it ideal for applications that require data security, such as in financial, healthcare, or industrial settings

Memory-First Database: RaimaDB’s ability to operate fully in-memory while offering hybrid configurations with persistent storage makes it ideal for applications that require fast access and data durability, such as real-time processing systems

Stateful Database: With ACID-compliant transactions and concurrency control, RaimaDB maintains a consistent state across all operations, ensuring reliable data handling for embedded systems that require persistent data across sessions

Soft Real-Time Database: RaimaDB provides real-time data access with flexible timing constraints, suitable for applications where data needs to be processed in a timely manner, but strict deadlines are not critical

Strict Consistency Database: RaimaDB ensures strict consistency across distributed environments using ACID-compliant transactions and multi-version concurrency control, making it suitable for systems where data integrity is paramount

Highly Scalable Database: RaimaDB’s distributed architecture, replication, and synchronization capabilities allow it to scale across multiple nodes, making it well-suited for large IoT networks or edge computing environments that require high scalability

 

Are you looking for a database like those mentioned above? Download a free trial of RaimaDB and receive complimentary engineering consulting for your evaluation.

Harnessing the Power of RaimaDB with Python https://raima.com/harnessing-the-power-of-raimadb-with-python/ Mon, 19 Aug 2024 16:57:31 +0000 https://raima.com/?p=38299 We’ve seen growing interest from Python developers who want to use RaimaDB in mission-critical projects that need a fast, reliable database. Here, we explore how the RaimaDB community has successfully utilized three tools (Cython, ODBC, and C-to-Python converters) to use RaimaDB in their Python applications.

 

1. Cython: Bridging Python and C for High Performance

Cython allows developers to write C extensions for Python in a straightforward manner, offering a significant speedup for critical code sections. Our community has found it effective for handling RaimaDB operations. By embedding C code directly within Python, Cython enables seamless and efficient data manipulation and querying, making it an excellent tool for applications where performance is critical.

Key Benefits:

  • Performance: Cython compiles Python code to C, providing a substantial performance boost.
  • Ease of Use: Developers can write Python-like syntax and achieve C-level efficiency.
  • Integration: Directly integrates with existing Python and C codebases.

Use Cases:

  • Real-time data processing applications.
  • Systems requiring extensive database interactions.
  • Applications where execution speed is crucial.
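
To make the pattern concrete, here is a small, self-contained sketch of the kind of hot loop teams typically move into Cython. Nothing below is RaimaDB-specific; the function is plain Python so the logic is clear, and the comment shows the C typing a .pyx file would add.

```python
# A hot aggregation loop of the kind Cython accelerates. In a .pyx file
# the signature would carry C types, for example:
#     def moving_avg(double[:] samples, int window): ...
# which lets Cython compile the loop down to C. Plain-Python equivalent:
def moving_avg(samples, window):
    """Trailing moving average over a sequence of numeric samples."""
    out = []
    total = 0.0
    for i, s in enumerate(samples):
        total += s
        if i >= window:
            total -= samples[i - window]  # drop the sample leaving the window
        out.append(total / min(i + 1, window))
    return out

print(moving_avg([1.0, 2.0, 3.0, 4.0], 2))  # [1.0, 1.5, 2.5, 3.5]
```

Compiling such a module with Cython’s cythonize tool (for example, `cythonize -i module.pyx`) produces a C extension that Python imports like any other module.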

 

2. ODBC: Universal Connectivity with PyODBC

ODBC (Open Database Connectivity) provides a universal interface to database systems, and PyODBC is the Pythonic way to tap into this power. By using PyODBC, developers can execute SQL commands against RaimaDB, facilitating a wide range of data operations from simple queries to complex transactions. This method is particularly beneficial for applications that require interoperability.

Key Benefits:

  • Flexibility: Connects Python applications to any database that supports ODBC.
  • Standardization: Utilizes SQL standards for database interactions.
  • Compatibility: Works across various operating systems and database systems.

Use Cases:

  • Multi-database environments.
  • Legacy systems requiring integration with modern Python applications.
  • Cross-platform database applications.
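
As a sketch of the pattern, a query helper might look like the following. The connection string, table, and column names here are hypothetical, and the helper only assumes a DB-API-style connection with qmark (`?`) parameters, which is what PyODBC provides.

```python
# With pyodbc the connection would come from something like:
#     import pyodbc
#     conn = pyodbc.connect("DSN=RaimaDB")   # DSN name is hypothetical
# The helper below only assumes a DB-API connection using '?' parameters,
# so it works with pyodbc or any compatible driver.
def fetch_recent_readings(conn, since_ts):
    """Return (id, temp) rows newer than since_ts from a 'readings' table."""
    cur = conn.cursor()
    cur.execute("SELECT id, temp FROM readings WHERE ts > ?", (since_ts,))
    return cur.fetchall()
```

Because the helper depends only on the DB-API surface, the same code can be exercised against any qmark-style driver during testing and pointed at a RaimaDB DSN in production.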

 

3. Converting C to Python: Enhancing Code Reusability with Shiboken and Cppyy

For software teams with a substantial codebase in C or C++, exposing this code to Python can significantly accelerate development times and enhance maintainability. Tools like Shiboken and Cppyy automatically generate Python bindings for C and C++ code, allowing Python programs to call C functions as if they were native Python. This approach not only improves code reusability but also preserves the performance benefits of C.

Key Benefits:

  • Code Reusability: Leverages existing C code within Python applications.
  • Maintainability: Simplifies the codebase by using Python’s cleaner syntax.
  • Efficiency: Maintains high performance of C code within Python applications.

Use Cases:

  • Integrating legacy C codebases with modern Python frameworks.
  • Developing high-performance Python modules with existing C solutions.
  • Facilitating the migration of large-scale, performance-sensitive applications to Python.
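
The idea these generators automate can be seen in miniature with Python’s standard-library ctypes, which requires declaring each C signature by hand. The sketch below calls the C math library on a typical Linux or macOS system, rather than any RaimaDB API, purely to illustrate invoking a C function from Python.

```python
import ctypes
import ctypes.util

# Locate and load the C math library, then declare sqrt's C signature.
# This per-function boilerplate is exactly what tools like Shiboken and
# Cppyy generate automatically for entire C/C++ codebases.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0
```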

 

Conclusion

Integrating RaimaDB with Python using Cython, ODBC with PyODBC, or C-to-Python conversion tools offers substantial benefits for software engineers looking to enhance the performance and scalability of their applications. Each method provides unique advantages and caters to different needs, from performance enhancement with Cython, flexibility with PyODBC, to reusability with C-to-Python converters. As the RaimaDB community continues to grow, these tools will undoubtedly play a crucial role in shaping efficient, robust database solutions in the Python ecosystem.
Are you interested in testing RaimaDB for your application? Download a free trial of RaimaDB here.

RaimaDB and QNX Forge a Powerful Partnership in Embedded Systems https://raima.com/raimadb-and-qnx-forge-a-powerful-partnership-in-embedded-systems/ Tue, 09 Jul 2024 14:28:05 +0000 https://raima.com/?p=38289 July 8th, 2024 – In the rapidly evolving world of embedded systems, the demand for reliable, efficient, and scalable solutions is paramount. This is where RaimaDB and QNX, a division of BlackBerry, come into play. Together, they form a robust foundation for developing high-performance applications across various verticals, from automotive and industrial automation to medical devices and telecommunications.

 

RaimaDB: Pioneering Embedded Database Solutions

RaimaDB, a leading high-performance embedded database, is designed to meet the stringent demands of embedded systems. Its lightweight architecture and fast data processing capabilities make it ideal for resource-constrained environments. Key features include:
• Lightweight and Fast: Optimized for minimal footprint and high-speed data processing.
• ACID Compliance: Ensures data integrity and reliability.
• Multi-core Support: Enhances performance using modern multi-core processors.
• Real-time Data Management: Facilitates timely data handling and analysis.
• Cross-Platform Compatibility: Supports various operating systems for flexible deployment.
• From the edge to the cloud: RaimaDB enables clients to push data between QNX targets, from the edge to a cloud, using either third-party replication or RaimaDB’s proprietary replication technology.
• Cybersecurity: Raima enables clients to have stronger cybersecurity protection through Raima’s encryption functionality and SSL support.
• Standards: RaimaDB fully follows the MISRA C++’12/AUTOSAR14 development standard for utmost safety.

 

QNX: The Benchmark in Embedded Operating Systems

QNX® is renowned for its unmatched reliability, security, and performance in embedded operating systems. Its key features include:
• Microkernel Architecture: High reliability and security by isolating critical components.
• Real-time Performance: Deterministic response times for time-sensitive applications.
• Safety Certifications: Compliance with industry standards such as ISO 26262 and IEC 61508.
• Security: Robust features to protect against cyber threats.
• Scalability: Supports a wide range of hardware platforms.

 

A Synergistic Partnership

The integration of RaimaDB with QNX creates a formidable solution for developing embedded systems, offering:
• Enhanced Performance: Combines RaimaDB’s high-speed data processing with the QNX operating system’s real-time performance.
• Reliability and Integrity: Allows for robust data integrity and system reliability.
• Real-time Data Processing: Ideal for applications requiring immediate data response.
• Security and Safety: Meets stringent safety and security standards.

 

Expanding Horizons in Mobile Robotics and Autonomous Systems

The collaboration between RaimaDB and QNX extends its impact into the mobile robotics category, including UAVs, drones, and autonomous vehicles. These systems require robust, real-time data processing and reliability to navigate complex environments and perform critical tasks efficiently. Together, RaimaDB and QNX provide an out-of-the-box platform that integrates seamlessly into an ecosystem of advanced mobile robotics solutions.

 

Application Verticals

Automotive: Advanced Driver Assistance Systems (ADAS), infotainment systems, and V2X communication benefit from real-time processing, reliability, and security features.

Industrial Automation: Real-time control and monitoring are crucial. The scalability of RaimaDB and QNX allows deployment from small machinery to large factory floors.

Medical Devices: Suitable for patient monitoring systems, diagnostic equipment, and surgical robots, ensuring timely data processing and robust security for sensitive data.

Telecommunications: Manages large data volumes and real-time processing in network management systems, communication infrastructure, and IoT devices.

 

Join the Ecosystem

The partnership between RaimaDB and QNX is more than a technological integration; it’s an invitation to join a thriving ecosystem. Together, we are building a platform that supports innovation and excellence in embedded systems.

 

For more information on how RaimaDB and QNX can enhance your embedded solutions, please contact:

Scott Meder
Director of Sales at Raima
[email protected]

 

About RaimaDB
RaimaDB is a high-performance embedded database solution provider, offering robust and efficient data management for embedded systems.

Media contact:
Fredrik Sande
Marketing Manager
[email protected]

RaimaDB – Reliability through Failure Simulation https://raima.com/raimadb-reliability-through-failure-simulation/ Mon, 06 May 2024 17:06:09 +0000 https://raima.com/?p=38264

RaimaDB showcases a commitment to reliability and robustness, particularly through its approach to failure simulation. The system’s memory allocation mechanism allows for two modes of memory provision: it can receive memory as a single large chunk from the caller (the user of the RDM API) or in large chunks from the operating system. Moreover, the exact algorithm used for managing this memory can be selected at compile time, offering further customization to optimize performance and resource management. This approach provides the necessary flexibility for managing memory according to the specific requirements of the application.


RaimaDB’s Memory Allocation and Failure Simulation

An essential aspect of RaimaDB’s memory management strategy is the inclusion of a failure simulation algorithm within one of its memory allocation implementations. This algorithm intentionally induces failures at set points in the allocation process. The objective of this strategy is to test the database system’s resilience under stress and to verify its capability to continue operation under conditions where memory availability is limited. By introducing failure points deliberately within the allocation process, RaimaDB aims to enhance the robustness of the system and to obtain insights into the system’s behavior under adverse conditions, contributing to the overall goal of improving database system reliability.

The failure simulation feature within RaimaDB is integrated into its Quality Assurance (QA) framework, which consists of a suite of tests developed in C/C++. These tests, initially not crafted with failure simulation specifically in mind, are transactional by design. This means that should failures occur, any resources allocated by a test are explicitly freed. This design principle aids in incorporating failure simulation into the testing process.

With failure simulation activated, the QA framework can run tests in a mode where memory allocations are designed to fail intermittently. The framework is responsible for verifying several key outcomes: it checks that no resources are leaked after a failure, confirms that no additional allocations are made post-failure, and ensures that the test or test case ends in failure as intended. This process is central to evaluating the system’s ability to handle and recover from allocation failures, ensuring that RaimaDB maintains its system integrity and resource management effectively.

The default operational mode when doing failure simulation in the QA framework initiates with a run to count the total memory allocations, preparing for targeted failure testing. Following this, a sequence of tests is executed, each designed to induce a failure at a consecutive memory allocation point, starting from the first and proceeding until every allocation has been sequentially challenged. This approach methodically simulates failures across all potential points, effectively preparing the system for a wide range of failure scenarios.
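
The counting-then-failing scheme described above can be sketched in a few lines of Python. This is an illustrative model rather than Raima’s actual C/C++ QA framework: a wrapper allocator fails at exactly the N-th request, a first run counts the total allocations, and one run per index then challenges each allocation point while checking that nothing leaks.

```python
# Illustrative model of allocation-failure testing (not Raima's actual
# framework). FailingAllocator fails at exactly the fail_at-th request;
# run_test stands in for a transactional QA test that frees everything
# it allocated, even on failure.
class FailingAllocator:
    def __init__(self, fail_at=None):
        self.count = 0          # allocations requested so far
        self.fail_at = fail_at  # 1-based index to fail at (None = never)
        self.live = set()       # currently outstanding allocations

    def alloc(self):
        self.count += 1
        if self.count == self.fail_at:
            raise MemoryError(f"simulated failure at allocation {self.count}")
        buf = object()
        self.live.add(buf)
        return buf

    def free(self, buf):
        self.live.remove(buf)

def run_test(allocator):
    held = []
    try:
        for _ in range(3):               # the "test" makes 3 allocations
            held.append(allocator.alloc())
    finally:
        for buf in held:                 # transactional cleanup on any exit
            allocator.free(buf)

# Pass 1: count the total number of allocations the test makes.
counter = FailingAllocator()
run_test(counter)
total = counter.count

# Pass 2: one run per allocation point, verifying no leaks after failure.
for i in range(1, total + 1):
    allocator = FailingAllocator(fail_at=i)
    try:
        run_test(allocator)
        raise AssertionError("expected a simulated failure")
    except MemoryError:
        pass
    assert not allocator.live, "leak detected after simulated failure"
```

The same structure scales to a real test suite: the framework substitutes the allocator behind the API under test and asserts, after each induced failure, that no resources remain outstanding.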

While the systematic failure simulation provides a robust foundation for testing, it is not entirely sufficient for the nuanced process of debugging and addressing issues that arise. To enhance the efficacy of this process, the QA framework allows for additional flexibility through command-line parameters when executing tests. Developers can specify these parameters to trigger failure simulation at a particular allocation, within a defined range of allocations, or to continue simulation from a specified point. This functionality is crucial for efficiently pinpointing and rectifying issues, enabling developers to focus on specific failure scenarios without the need to retest previously validated allocations. It streamlines the debugging process, allowing for a more targeted and effective resolution of issues, and facilitates the continuation of testing beyond fixed problems, avoiding unnecessary repetition of tests for known successful scenarios.

The classical approach to debugging issues discovered through failure simulation involves setting up the environment to induce a failure at a specific allocation and then running this scenario within a debugger. However, this method doesn’t always provide the comprehensive insight needed to effectively diagnose and resolve issues. In many cases, developers find it necessary to break execution at points earlier than the failure to collect additional information. This need arises because understanding the root cause of an issue often requires insights from both before and after the point of failure, and the exact information needed can be unpredictable.

Moreover, when running tests that result in failures, particularly those leading to crashes, there is a possibility that persisted files might be left in a state that slightly differs from their original condition. Such discrepancies can result in consecutive runs not being entirely identical to previous ones, further complicating the debugging process. This variability underscores the challenge of relying solely on traditional debugging techniques, as the dynamic nature of failure scenarios necessitates a more flexible and comprehensive approach to gathering diagnostic information.


Streamlining the Debugging Processes

One effective solution to the complexities of traditional debugging is the use of rr (record and replay), a lightweight tool designed for recording and deterministic debugging. rr allows developers to capture the execution of a test case that culminates in a failure, ensuring an exact replication of the events leading to the issue for later analysis. This capability is crucial for understanding the precise conditions under which failures occur, as it eliminates the inconsistencies inherent in repeatedly running live tests.

Moreover, rr enhances the debugging process by enabling developers to step into, over, and out of code execution in reverse. This feature is particularly valuable because it allows for detailed examination of the program’s state at any point, without the need to restart the test from the beginning if the session progresses too far. Such reverse execution control means developers can efficiently navigate through the program’s execution timeline, pinpointing the exact moment and context of the failure.

Integrating rr into the debugging workflow not only streamlines the identification of issues by providing consistent and repeatable test conditions but also offers unparalleled flexibility in analyzing the program’s behavior. Developers can dissect the execution flow with precision, moving backward to uncover the sequence of events leading to a failure, thereby significantly reducing the time and effort required to isolate and resolve problems. This advanced approach to debugging, facilitated by rr, ensures a more effective and thorough investigation of failures, enhancing the overall reliability and robustness of RaimaDB.

Utilizing rr’s command-line interface for debugging provides a text-based interaction within the terminal, akin to initiating gdb directly. However, in the context of modern development environments, this approach may not fully leverage the capabilities offered by current technologies. Visual Studio Code, a widely-used code editor, features an extension named Midas that significantly enhances the debugging experience with rr. Midas offers a graphical interface for debugging rr recordings, aligning with the standard debugging experience in Visual Studio Code but with the added benefits of rr’s unique functionalities, such as reverse execution.

With Midas, our developers gain the ability to debug a recording as if it were a live execution, including the capability to set hardware watchpoints and reverse back to them, allowing the debugger to break at precisely those points. This integration between rr and Visual Studio Code through Midas has proven to provide an exceedingly efficient workflow for our developers. The ability to seamlessly navigate forward and backward in code execution, coupled with the graphical interface’s intuitive controls, significantly reduces the complexity of debugging intricate issues.


Conclusion

In conclusion, the combination of RaimaDB’s innovative failure simulation within its QA framework, the strategic use of rr for detailed and deterministic debugging, and the integration of Midas for an enhanced graphical debugging experience, collectively forms a highly effective and efficient approach to ensuring software reliability. This comprehensive testing and debugging strategy not only rigorously assesses the system’s resilience to memory allocation failures but also upholds the integrity and efficiency of resource management. By embracing these advanced tools and methodologies, RaimaDB reinforces its commitment to delivering a robust and reliable database management system, capable of meeting the demands of today’s complex and dynamic software environments.
