JSON Lines Table (JSONLT) 1.0

Living Standard

This version:
https://jsonlt.org/latest/
Issue Tracking:
GitHub
Editor:

Abstract

This specification defines JSONLT, a format and interface for storing keyed records in a single file using an append-only operation log. The format uses JSON Lines [JSONL] as its physical representation, producing files that are both human-readable and optimized for version control systems.

1. Introduction

This section is non-normative.

A JSONLT file contains a sequence of operations—insertions and deletions—rather than a snapshot of current state. The logical state of the table (the current set of records after all operations have been applied; see § 4.7 Logical state) is computed by replaying these operations in order. This append-only design ensures that modifications produce minimal diffs when the file is tracked in version control.

1.1. Purpose

JSONLT provides a lightweight, portable format for storing keyed records that:

1.2. Scope

This specification defines:

2. Terminology

This specification uses terminology from the Infra Standard [INFRA] where applicable, including string, byte sequence, list, and map.

Note: The Infra Standard defines "string" as a sequence of code points, which can include surrogate code points. JSONLT uses this definition, but § 5.5 String values recommends against unpaired surrogates for interoperability. Implementations can use Infra’s "scalar value string" (which excludes surrogates) if they reject strings with unpaired surrogates.

A JSON object is an unordered collection of property names (strings) and JSON values, as defined in [RFC8259]. (This definition restates RFC 8259 for convenience of reference within this specification.)

Two JSON values are logically equal if any of the following conditions holds:

The nesting depth of a JSON value is the maximum number of nested JSON objects and arrays at any point within that value, where the outermost value is at depth 1. A primitive value (null, boolean, number, or string) has nesting depth 1. An empty object or array has nesting depth 1. An object or array containing only primitive values has nesting depth 2.
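The nesting-depth rule above can be sketched as a small recursive function over parsed JSON values (an illustrative sketch; the function name is not part of this specification):

```python
def nesting_depth(value):
    """Compute the nesting depth of a parsed JSON value.

    Primitives (null, boolean, number, string) have depth 1;
    an object or array has depth 1 plus the maximum depth of
    its members, so an empty container also has depth 1.
    """
    if isinstance(value, dict):
        return 1 + max((nesting_depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((nesting_depth(v) for v in value), default=0)
    return 1  # null, boolean, number, or string
```

For example, `nesting_depth({"a": 1, "b": [2]})` is 3: the object at depth 1, the array at depth 2, and its primitive element at depth 3.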

2.1. Notation

This specification uses a language-agnostic notation to describe abstract interfaces. The notation conventions are inspired by [RFC9622].

Object creation is written as:

object := Constructor(param, optionalParam?)

This creates an object by invoking a constructor with the given parameters. Parameters marked with ? are optional.

Method invocation is written as:

result := object.method(param)

This invokes a method on an object and assigns the return value. When a method returns no meaningful value, the assignment is omitted:

object.action()

The basic types used in this specification are:

The compound types are:

Implementations SHOULD map these abstract types to idiomatic constructs in their target language. See § 17 Implementation mapping for guidance.

3. Conformance

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

Implementations MAY conform as a parser, generator, or both, as defined in the following subsections. A conforming implementation MAY provide additional functionality not described here, provided such functionality does not alter the behavior of operations defined in this specification.

3.1. Parser

A conforming parser is an implementation that reads JSONLT files.

A conforming parser SHALL:

  1. Accept any file that conforms to § 5 Physical format.

  2. Implement the read a table file and compute the logical state algorithms.

  3. Signal errors as defined in § 6 Exceptions.

A conforming parser SHOULD preserve unrecognized fields whose names begin with $ to support forward compatibility with future specification versions.

A conforming parser MAY accept non-conforming input, provided that it produces a diagnostic indicating the non-conformance and processes valid lines normally. A conforming parser MAY recover from the following deviations:

A conforming parser SHOULD NOT attempt recovery for:

Note: This specification does not define separate "lenient parser" or "strict parser" profiles. A conforming parser MAY implement recovery for the deviations listed above; implementations that do so remain conforming parsers. An implementation that rejects non-conforming input outright is equally conforming.

A conforming parser MAY omit write operations.

3.2. Generator

A conforming generator is an implementation that writes JSONLT files.

A conforming generator SHALL:

  1. Produce output that conforms to § 5 Physical format.

  2. Use deterministic serialization for all output.

  3. Reject records that violate § 4.3 Record, including records with field names beginning with $.

A conforming parser SHALL accept any file produced by a conforming generator.

A conforming generator MAY omit read operations.

3.3. Claiming conformance

An implementation claiming conformance to this specification SHOULD document:

Optional features that implementations MAY support include:

A conformance claim MAY use the form: "This implementation conforms to JSONLT 1.0 as a [parser|generator|parser and generator]."

4. Constructs

4.1. Key

A key identifies a record within a table (see § 4.8 Table). A key is one of:

A key element is a string or integer that may appear as a component of a tuple key.

A valid key is a key that is either a string, an integer within the range [−(2^53)+1, (2^53)−1], or a non-empty tuple of valid key elements. A valid key element is a string or an integer within the range [−(2^53)+1, (2^53)−1].

A scalar key is a key that is either a string or an integer (but not a tuple).

A conforming generator SHALL NOT produce tuple keys with zero elements; a conforming parser SHALL reject empty arrays where a tuple key is expected. A tuple key SHALL NOT exceed 16 elements; a conforming parser or conforming generator SHALL signal a limit error when this limit is exceeded.

A JSON number is considered an integer if, when converted to an IEEE 754 double-precision value, it has no fractional component and falls within the specified range. The numbers 1, 1.0, and 1e0 are all considered the integer 1. The number 1.5 is not an integer. The number 9007199254740992 (2^53) is not a valid integer key because it exceeds the range; IEEE 754 double-precision can represent integers exactly only within the range [−(2^53)+1, (2^53)−1].

A conforming parser SHALL reject key fields containing numbers that, when parsed as IEEE 754 double-precision, result in Infinity, -Infinity, or NaN. A conforming parser SHALL signal a key error for such values.

A conforming parser SHALL normalize integer keys to their canonical numeric value before comparison. Implementations SHALL NOT distinguish between equivalent numeric representations (for example, 1, 1.0, 1e0 SHALL all map to the same key).
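The normalization and range requirements above can be sketched as follows, assuming JSON numbers have already been parsed into host integers or floats (the function name and error handling are illustrative, not normative):

```python
import math

MAX_KEY_INT = 2**53 - 1  # interoperable IEEE 754 integer range


def normalize_integer_key(num):
    """Return the canonical integer for a JSON number used as a key,
    or raise ValueError (a key error in spec terms) if the number is
    not a valid integer key."""
    if isinstance(num, float):
        if not math.isfinite(num):
            raise ValueError("Infinity/NaN is not a valid key")
        if not num.is_integer():
            raise ValueError("fractional number is not a valid key")
        num = int(num)  # 1.0 and 1e0 normalize to the integer 1
    if not -MAX_KEY_INT <= num <= MAX_KEY_INT:
        raise ValueError("integer key outside interoperable range")
    return num
```

Under this sketch, 1, 1.0, and 1e0 all normalize to the same key, as required.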

A conforming generator SHALL NOT produce keys that are null, boolean, array (except as a tuple of valid key elements), or object. A conforming parser SHALL treat such values as errors when encountered where a key is expected.

An empty string ("") is a valid string key.

Table using an empty string as a key:
{"$jsonlt": {"version": 1, "key": "id"}}
{"id": "", "name": "Default record"}
{"id": "alice", "name": "Alice"}

The empty string "" and "alice" are distinct keys.

4.1.1. Key equality

Keys are compared for equality as follows:

Unicode normalization is not performed during key comparison. A conforming parser or conforming generator MAY normalize keys to NFC before comparison and storage; if so, the implementation SHALL document this behavior and apply it consistently.

Unicode normalization affects key equality. Consider this file:
{"$jsonlt": {"version": 1, "key": "name"}}
{"name": "caf\u00E9", "order": 1}
{"name": "cafe\u0301", "order": 2}

The first key uses U+00E9 (precomposed é: caf\u00E9), while the second uses U+0065 U+0301 (e + combining acute: cafe\u0301). Both render as "café" but are different code point sequences. Without normalization, the table would contain two distinct records; with NFC normalization, the second would replace the first.
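The effect of optional NFC normalization can be demonstrated with the standard library (Python's `unicodedata` module is used here purely for illustration):

```python
import unicodedata

precomposed = "caf\u00E9"   # U+00E9, precomposed é
combining = "cafe\u0301"    # U+0065 U+0301, e + combining acute accent

# Without normalization, these are distinct code point sequences,
# and therefore distinct keys:
assert precomposed != combining

# With NFC normalization (an optional, documented behavior),
# the two forms collide into a single key:
assert unicodedata.normalize("NFC", combining) == precomposed
```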

4.1.2. Key ordering

Keys are ordered as follows for operations that require ordering (such as compaction and iteration):

When comparing keys of different types, integers are ordered before strings, and strings are ordered before tuples.

When this specification refers to keys "in ascending order" or "in ascending key order," it means sorted from lowest to highest according to the ordering defined in this section.

Key ordering examples:
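The cross-type rule (integers before strings, strings before tuples) can be expressed as a sort key; the within-type ordering used here (numeric for integers, code point order for strings, elementwise for tuples) is an assumption of this sketch:

```python
def key_sort_key(key):
    """Sort key implementing the cross-type ordering rule:
    integers order before strings, strings before tuples."""
    if isinstance(key, int):
        return (0, key)
    if isinstance(key, str):
        return (1, key)
    # Tuple keys (lists here) compare element by element.
    return (2, [key_sort_key(e) for e in key])


keys = [["acme", 2], "alice", 7, ["acme", 1], "bob", -3]
ordered = sorted(keys, key=key_sort_key)
# ordered: [-3, 7, "alice", "bob", ["acme", 1], ["acme", 2]]
```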

4.2. Key specifier

A key specifier defines how to extract a key from a record. A key specifier is one of:

A valid key specifier is a key specifier that is either a string, or a tuple containing at least one field name with no duplicate field names.

Two key specifiers match if, after normalizing single-element tuples to strings, they are structurally identical and each field name consists of the same sequence of Unicode code points. For example, "id" and ["id"] match because ["id"] normalizes to "id", while ["org", "id"] matches only ["org", "id"] (or the equivalent single-element-normalized form, which in this case is unchanged).

A conforming generator SHALL NOT produce key specifier tuples with zero field names; a conforming parser SHALL reject empty arrays where a key specifier is expected.

A conforming generator SHALL NOT produce key specifier tuples with duplicate field names; a conforming parser SHALL reject such tuples.

Given a key specifier, the extract a key algorithm extracts the corresponding key from a record.

Note: A single-element tuple key specifier (for example, ["id"]) produces scalar keys, not tuple keys. This normalization ensures that "id" and ["id"] are interchangeable key specifiers—both extract the same scalar key from a record. Tuple keys are only produced when the key specifier contains two or more field names.

Single-element tuple key specifier normalization. These two headers are equivalent:
{"$jsonlt": {"version": 1, "key": "id"}}
{"$jsonlt": {"version": 1, "key": ["id"]}}

Both extract the scalar key "alice" (not the tuple ["alice"]) from the following record:

{"id": "alice", "name": "Alice"}

In contrast, a two-element key specifier produces tuple keys:

{"$jsonlt": {"version": 1, "key": ["org", "id"]}}
{"org": "acme", "id": "alice", "name": "Alice"}

Here the key is the tuple ["acme", "alice"].
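The examples above follow directly from the single-element normalization rule. A sketch of key extraction, assuming records are parsed into dicts (the spec's extract a key algorithm also validates the extracted values, which is omitted here):

```python
def extract_key(record, key_specifier):
    """Extract a key from a record given a key specifier.

    A string specifier, or a single-element tuple specifier, yields
    a scalar key; two or more field names yield a tuple key
    (represented as a list here)."""
    if isinstance(key_specifier, str):
        key_specifier = [key_specifier]
    values = []
    for name in key_specifier:
        if name not in record:
            raise KeyError(f"record is missing key field {name!r}")
        values.append(record[name])
    return values[0] if len(values) == 1 else values
```

With this sketch, `"id"` and `["id"]` extract the same scalar key, while `["org", "id"]` extracts a tuple key.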

A key field is a field whose name is specified by the table’s key specifier.

Note: Field names in a key specifier can contain any characters valid in JSON property names, including spaces, newlines, and other special characters. For example, {"key": "field with spaces"} is a valid header with a key specifier containing a space.

4.3. Record

A record is a JSON object that contains at minimum the fields REQUIRED by the table’s key specifier, with values that are valid keys or valid key elements. A record containing only the key fields (with no additional data fields) is valid.

Field names beginning with $ are reserved for use by this specification and future extensions. A conforming generator SHALL NOT produce records containing fields whose names begin with $. A conforming parser encountering a record with unrecognized $-prefixed fields SHOULD preserve them for forward compatibility with future specification versions.

4.4. Tombstone

A tombstone is a JSON object that marks the deletion of a record. A tombstone contains:

A tombstone contains only the key fields and $deleted. A conforming generator SHALL NOT produce tombstones with additional fields.

When reading a file, if an object contains $deleted with a value other than the boolean true, a conforming parser SHALL treat this as a parse error. If an object contains $deleted: true along with fields other than the key fields, a conforming parser SHOULD treat it as a valid tombstone (ignoring extra fields).

Tombstone with extra fields (accepted when recovering from non-conforming input):
{"$deleted": true, "id": "alice", "reason": "user requested deletion"}

A conforming parser SHOULD accept this tombstone, treating "reason" as extraneous and ignoring it. A conforming generator SHALL NOT produce tombstones with extra fields.

4.5. Operation

An operation is a transformation that modifies the logical state of a table. There are two kinds of operations:

An upsert operation (or simply upsert) inserts a new record or replaces an existing record with the same key. It is represented by a record.

A delete operation (or simply delete) removes a record from the logical state. It is represented by a tombstone.

When replayed in sequence, operations determine the current contents of the table.

The determine the operation type algorithm inspects a parsed JSON object to determine whether it represents an upsert operation or delete operation.
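A minimal sketch of operation-type classification, following the $deleted rules in § 4.4 (the error type is illustrative):

```python
def operation_type(obj):
    """Classify a parsed non-header line as an upsert or a delete.

    An object carrying $deleted: true is a delete; any other object
    is an upsert. $deleted with any other value is a parse error."""
    if "$deleted" in obj:
        if obj["$deleted"] is not True:
            raise ValueError("$deleted must be the boolean true")
        return "delete"
    return "upsert"
```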

4.6. Header

A header is an optional first line in a JSONLT file that provides metadata about the file. A header is a JSON object containing a single field $jsonlt whose value is an object with the following fields:

version (REQUIRED)
An integer specifying the JSONLT specification version. For this specification, the value SHALL be 1. A conforming parser SHALL reject the file with a parse error indicating an unsupported version if the version field contains a value other than 1.
key (optional)
The key specifier for the table, as a string or array of strings.
$schema (optional)
A string containing a URL reference to a [JSON-SCHEMA] that validates records in this table.
schema (optional)
A [JSON-SCHEMA] object that validates records in this table. Mutually exclusive with $schema; if both are present, a conforming parser SHALL treat this as a parse error.
meta (optional)
An arbitrary JSON object containing user-defined metadata.

A conforming parser SHOULD preserve unknown fields in the $jsonlt object for forward compatibility.

Minimal header (version only):
{"$jsonlt": {"version": 1}}
Header with key specifier:
{"$jsonlt": {"version": 1, "key": "id"}}
Header with external schema:
{"$jsonlt": {"version": 1, "key": "id", "$schema": "https://example.com/users.schema.json"}}
Header with inline schema and compound key:
{"$jsonlt": {"version": 1, "key": ["org", "id"], "schema": {"type": "object", "properties": {"org": {"type": "string"}, "id": {"type": "integer"}, "name": {"type": "string"}}}, "meta": {"created": "2025-01-15"}}}

Note: Because each line in a JSONLT file is a complete JSON object, inline schemas are serialized on a single line. For large schemas, consider using $schema to reference an external schema document instead.

Table with integer keys (numeric identifiers):
{"$jsonlt": {"version": 1, "key": "id"}}
{"id": 1001, "name": "Widget", "price": 9.99}
{"id": 1002, "name": "Gadget", "price": 19.99}
{"id": 1003, "name": "Sprocket", "price": 4.99}

The keys are the integers 1001, 1002, and 1003.

Table with compound (tuple) keys:
{"$jsonlt": {"version": 1, "key": ["org", "id"]}}
{"org": "acme", "id": 1, "name": "Alice", "role": "admin"}
{"org": "acme", "id": 2, "name": "Bob", "role": "user"}
{"org": "globex", "id": 1, "name": "Carol", "role": "admin"}
{"$deleted": true, "org": "acme", "id": 2}

The keys are the tuples ["acme", 1], ["acme", 2], and ["globex", 1]. After the delete operation, only ["acme", 1] and ["globex", 1] remain. Note that ["acme", 1] and ["globex", 1] are distinct keys because their first elements differ.

Table with a three-element compound key:
{"$jsonlt": {"version": 1, "key": ["region", "org", "id"]}}
{"region": "us-east", "org": "acme", "id": 1, "name": "Alice"}
{"region": "us-east", "org": "acme", "id": 2, "name": "Bob"}
{"region": "eu-west", "org": "acme", "id": 1, "name": "Carol"}

The keys are the tuples ["us-east", "acme", 1], ["us-east", "acme", 2], and ["eu-west", "acme", 1]. All three are distinct because they differ in at least one element position.

Table with integer keys at boundary values:
{"$jsonlt": {"version": 1, "key": "id"}}
{"id": 9007199254740991, "name": "Maximum valid integer key"}
{"id": -9007199254740991, "name": "Minimum valid integer key"}
{"id": 0, "name": "Zero is valid"}
{"id": -1, "name": "Negative integers are valid"}

The values 9007199254740991 ((2^53)−1) and -9007199254740991 (−(2^53)+1) are the maximum and minimum valid integer keys. The value 9007199254740992 (2^53) would not be a valid integer key because it exceeds the interoperable integer range.

If a file’s first line is a valid JSON object containing the field $jsonlt, a conforming parser SHALL interpret that line as a header, not as an operation. If the first line is any other valid JSON object, the file has no header.

A file containing only a header (no operations) is valid and represents a table with no records.

When opening a table, if both the header and the caller provide a key specifier, they SHALL match. If they do not match, a conforming parser or conforming generator SHALL treat this as an error.

Files without headers are valid; a conforming parser SHALL treat them as version 1.

4.7. Logical state

The logical state of a table is a map from keys to records, representing the current contents of the table after all operations have been applied.

Note: In this specification, "record" refers to the JSON object stored in the file, while the logical state is a map containing key-record pairs (entries). When this specification refers to "records in the logical state," it means the record values stored in that map.

The compute the logical state algorithm replays a list of operations to produce the logical state.
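The replay can be sketched as follows. Serializing keys to JSON strings makes tuple (list) keys hashable as dict keys; that trick is an implementation detail of this sketch, not part of the specification:

```python
import json


def compute_logical_state(operations, key_specifier):
    """Replay operations in order to build the key-to-record map."""

    def extract(record):
        names = [key_specifier] if isinstance(key_specifier, str) else key_specifier
        vals = [record[n] for n in names]
        return vals[0] if len(vals) == 1 else vals

    state = {}
    for op in operations:
        k = json.dumps(extract(op))
        if op.get("$deleted") is True:
            state.pop(k, None)  # delete: remove the entry if present
        else:
            state[k] = op       # upsert: insert or replace
    return state
```

Replaying the four operations from the example in § 5.6 leaves a single record for key "alice".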

4.8. Table

A table is the central construct in JSONLT. A table has:

Note: In this specification, "file" refers to the physical storage (the JSONLT file on disk), while "table" refers to the logical construct that includes the parsed content and computed state. A table is backed by a file; operations on a table modify its underlying file.

A table provides operations to read, write, and query records. All write operations append to the underlying file; the file is never modified in place during normal operation. Only compaction replaces the file.

4.9. Transaction

A snapshot is a copy of a table’s logical state at a specific point in time. Within a transaction (defined below), the snapshot includes any modifications made within that transaction.

A buffer is a list of pending operations that have not yet been written to the underlying file.

A transaction is a context for performing multiple operations atomically. A transaction has:

While a transaction is active:

When a transaction commits:

  1. Acquire the exclusive file lock.

  2. Reload the file if it has been modified since the transaction started.

  3. For each key written by the transaction, if that key has been modified in the reloaded state compared to the transaction’s starting state, the commit SHALL fail with a conflict error.

  4. If no conflicts, append all buffered operations to the file as a single write operation.

  5. Release the lock.
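Step 3's write-write conflict check can be sketched as follows, assuming snapshots are key-to-record maps and keys are hashable; locking, reloading, and file I/O are omitted:

```python
def detect_conflicts(written_keys, start_state, reloaded_state):
    """Return the written keys whose records changed between the
    transaction's starting state and the reloaded state. A non-empty
    result means the commit fails with a conflict error."""
    return [
        key
        for key in written_keys
        if reloaded_state.get(key) != start_state.get(key)
    ]
```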

When a transaction aborts:

A conforming parser or conforming generator implementing transactions SHALL NOT permit nested transactions. Attempting to start a transaction while one is already active SHALL result in an error.

Transaction support is an optional feature. An implementation supporting transactions SHALL conform to both conforming parser and conforming generator requirements; it is not a separate third conformance profile. An implementation that conforms to only one profile MAY omit transaction operations.

JSONLT uses an optimistic concurrency model: transactions do not hold locks during execution, only at commit time. Conflict detection is based on write-write conflicts only—if two transactions write to the same key, the second to commit fails. Read-write conflicts (where a transaction reads a value that another transaction subsequently modifies) are not detected. Applications requiring stronger isolation guarantees SHOULD implement additional coordination at the application level.

Transaction workflow—successful commit:
tx := table.transaction()          (capture snapshot)
record := tx.get("alice")          (read from snapshot)
record.balance := record.balance + 100
tx.put(record)                     (buffer the write)
tx.commit()                        (acquire lock, check conflicts, write)

Transaction workflow—conflict scenario:

Process A                          Process B
─────────────────────────────────  ─────────────────────────────────
txA := table.transaction()         txB := table.transaction()
recA := txA.get("alice")           recB := txB.get("alice")
recA.balance := recA.balance + 50  recB.balance := recB.balance + 25
txA.put(recA)                      txB.put(recB)
txA.commit() → succeeds
                                   txB.commit() → conflict error
                                   (Process B needs to retry with new transaction)

In the conflict scenario, both transactions read the same starting state and modify the same key. When Process A commits first, it succeeds. When Process B attempts to commit, conflict detection finds that "alice" was modified since B’s transaction started, causing a conflict error.

5. Physical format

5.1. Encoding

A JSONLT file is a UTF-8 encoded text file without a byte order mark (BOM), conforming to [RFC8259].

A conforming generator SHALL NOT produce a BOM.

A conforming parser SHOULD strip any BOM encountered at the start of the file. This allows parsers to interoperate with files produced by systems that add BOMs, even though conforming JSONLT generators do not produce them.

A conforming parser SHALL reject byte sequences that are overlong encodings (such as encoding ASCII characters with multiple bytes), as these are not valid UTF-8 per Unicode 16.0 Section 3.9 and [RFC3629]. Overlong encodings have historically been used in security attacks to bypass input validation.

Note: This specification references Unicode 16.0 for UTF-8 encoding requirements. Character properties and normalization forms can be interpreted according to Unicode 16.0 or later versions that maintain backward compatibility.
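As a sketch of the decoding requirements in this section: Python's strict "utf-8" codec already rejects overlong encodings and other invalid sequences, so only the BOM tolerance needs explicit handling (the function name is illustrative):

```python
def decode_jsonlt_bytes(data: bytes) -> str:
    """Decode a JSONLT file body, stripping a leading BOM if some
    other tool added one. Conforming generators never produce a BOM;
    conforming parsers tolerate one on input."""
    text = data.decode("utf-8")  # raises UnicodeDecodeError on invalid UTF-8
    if text.startswith("\ufeff"):
        text = text[1:]
    return text
```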

5.2. Media type

The media type for JSONLT files is application/vnd.jsonlt.

The file extension for JSONLT files is .jsonlt.

Note: At the time of publication, this media type has not been registered with IANA. The vnd. prefix indicates a vendor-specific type per [RFC6838].

5.3. Line structure

An empty file (zero bytes) is valid and represents a table with no header and no operations. A conforming parser or conforming generator SHALL signal an error when opening an empty file without a key specifier.

Opening an empty file:
table := Table("/path/to/empty.jsonlt", "id")
count := table.count()   (returns 0)
table.put({"id": "first", "data": "example"})

The empty file requires a key specifier on open. After the put operation, the file contains one line.

Each line contains exactly one JSON object followed by a newline character (U+000A). A conforming generator SHALL produce only JSON objects, one per line. A conforming parser SHALL reject lines that contain valid JSON values other than objects. A conforming generator SHALL ensure non-empty files end with a newline character.

Note: JSONLT files are a subset of JSON Lines files. Every valid JSONLT file is a valid JSON Lines file, but not every JSON Lines file is valid JSONLT. JSON Lines permits any JSON value per line (strings, numbers, arrays, objects, booleans, or null), while JSONLT requires each line to be a JSON object. This restriction supports the key-value data model where each line represents a record or tombstone. Tools that read JSON Lines will accept JSONLT files, but JSONLT parsers will reject JSON Lines files containing non-object values.

A conforming generator SHALL NOT produce carriage return characters (U+000D). A conforming parser SHOULD strip CR characters preceding LF.

Note: JSONLT requires LF-only line endings for output, which is stricter than JSON Lines (which permits both LF and CRLF). This ensures consistent file hashes and diffs across platforms. Files created with CRLF (for example, on Windows systems not using JSONLT generators) are technically non-conforming output, but conforming parsers can accept them by stripping CR characters.

Note: [JSONL] is a community convention documented at https://jsonlines.org/, not a formal standards-track specification. Where JSONLT requirements differ from or extend JSON Lines conventions (such as the LF-only requirement), JSONLT requirements take precedence. This specification is self-contained and does not depend on future changes to JSON Lines documentation.

A conforming generator SHALL NOT produce empty lines (lines containing no characters before the newline). A conforming parser SHOULD skip empty lines.

A conforming parser SHALL signal a parse error diagnostic when encountering a line containing only whitespace. After signaling the error, a conforming parser MAY either halt processing or skip the line and continue. The [JSONLT-TESTS] suite expects parsers that halt to reject with PARSE_ERROR; parsers that continue may pass by producing the expected state.

If the file does not end with a newline character and the final line is valid JSON, a conforming parser SHALL accept it. If the file does not end with a newline character and the final line is not valid JSON (truncated due to crash or partial write), a conforming parser SHOULD ignore the malformed final line and process all preceding valid lines.
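A sketch of line handling that tolerates a truncated final line while treating malformed interior lines as parse errors (header detection, object-type checks, and the whitespace-only diagnostic are omitted):

```python
import json


def read_lines(text):
    """Parse file text into JSON objects, one per line. A final line
    that is not valid JSON (e.g. truncated by a crash) is ignored;
    a malformed interior line is a parse error."""
    objects = []
    lines = text.split("\n")
    for i, line in enumerate(lines):
        if line == "":
            continue  # empty line (or trailing split artifact): skip
        try:
            objects.append(json.loads(line))
        except json.JSONDecodeError:
            if i == len(lines) - 1:
                break  # truncated final line without newline: ignore
            raise      # malformed interior line: parse error
    return objects
```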

A conforming generator SHALL NOT produce JSON objects with duplicate keys. A conforming parser SHALL treat JSON objects containing duplicate keys as parse errors.

Note: This duplicate key requirement is stricter than [RFC8259], which uses "SHOULD" rather than "MUST" for unique keys. JSONLT uses "SHALL" (equivalent to "MUST" per [RFC2119]) and requires unique keys to ensure deterministic parsing and consistent key extraction.
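Many JSON parsers silently keep the last duplicate, so duplicate detection must be explicit. A sketch using Python's `json` module, which exposes pair construction via `object_pairs_hook`:

```python
import json


def parse_line(line):
    """Parse one line, signaling a parse error (ValueError here) if
    the object contains duplicate property names."""

    def no_duplicates(pairs):
        obj = {}
        for name, value in pairs:
            if name in obj:
                raise ValueError(f"duplicate key: {name!r}")
            obj[name] = value
        return obj

    return json.loads(line, object_pairs_hook=no_duplicates)
```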

5.4. Deterministic serialization

Deterministic serialization is a JSON serialization that produces consistent output for identical logical data. A conforming generator SHALL serialize JSON objects according to the following rules:

These rules ensure consistent output but do not guarantee byte-identical results across implementations due to variations in number formatting and string escaping.

Number formatting for non-key numeric values (integer vs. exponential notation, trailing zeros) is not constrained by this specification. Two conforming generators may produce different representations for the same numeric value (for example, 1000000 vs. 1e6). Applications requiring byte-identical output SHOULD normalize numeric values before storage or use [RFC8785].

Note: Implementations requiring byte-identical output across all implementations can conform to [RFC8785] (JSON Canonicalization Scheme, or JCS). JCS is informative; JSONLT does not require JCS conformance.

Deterministic serialization sorts object keys by Unicode code point:

Input (logical): {"zebra": 1, "apple": 2, "Banana": 3}

Output (serialized): {"Banana":3,"apple":2,"zebra":1}

Note that uppercase letters (U+0041-U+005A) sort before lowercase letters (U+0061-U+007A) in Unicode code point order.
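In Python, this serialization can be sketched with the standard `json` module: `sort_keys` compares strings by code point, matching the ordering above, and `separators` removes insignificant whitespace:

```python
import json


def serialize(obj):
    """Deterministic serialization sketch: keys sorted by Unicode
    code point, no insignificant whitespace, no unnecessary ASCII
    escaping (ensure_ascii=False)."""
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False)


serialize({"zebra": 1, "apple": 2, "Banana": 3})
# → '{"Banana":3,"apple":2,"zebra":1}'
```

Note that, per § 5.4, number formatting may still vary between implementations; this sketch only fixes key order and whitespace.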

5.5. String values

String values within records MAY contain any valid JSON string content, including escaped newlines (\n), tabs (\t), and other control characters. The JSON encoding ensures that literal newline characters never appear within a JSON string value, preserving the one-record-per-line property.

A conforming generator SHALL use standard JSON string escaping and SHALL escape characters that [RFC8259] requires to be escaped. A conforming generator SHOULD NOT escape characters that do not require escaping.

A conforming generator SHALL reject records containing string values with unpaired surrogate code points (U+D800 through U+DFFF not part of a valid surrogate pair). A conforming parser encountering unpaired surrogates SHOULD accept them but MAY issue a diagnostic.

Note: This requirement ensures cross-language interoperability. While JSON (via RFC 8259) technically permits unpaired surrogates, many programming languages (Rust, Go, Swift) cannot represent them in native string types. Requiring generators to reject unpaired surrogates ensures files can be read by implementations in any language.
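A sketch of the surrogate check a generator needs. In Python, a decoded JSON string can carry lone surrogates (for example via the escape `"\ud800"`), and any surrogate code point present in a Python `str` is by definition unpaired, since a valid surrogate pair decodes to a single supplementary code point:

```python
def has_unpaired_surrogate(s):
    """Detect unpaired surrogate code points (U+D800 through U+DFFF)
    in a string; a conforming generator rejects such records."""
    return any(0xD800 <= ord(ch) <= 0xDFFF for ch in s)
```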

5.6. Operation encoding

Each non-header line in a JSONLT file represents one operation. A conforming parser determines the operation type by inspecting the parsed object:

Given a table with key specifier "id", the following file content:
{"$jsonlt": {"version": 1, "key": "id"}}
{"id": "alice", "role": "user"}
{"id": "bob", "role": "admin"}
{"id": "alice", "role": "admin"}
{"$deleted": true, "id": "bob"}

The first line is a header. The subsequent lines represent this sequence of operations:

  1. upsert record with key "alice", value {"id": "alice", "role": "user"}

  2. upsert record with key "bob", value {"id": "bob", "role": "admin"}

  3. upsert record with key "alice", value {"id": "alice", "role": "admin"}

  4. delete record with key "bob"

The resulting logical state contains one record:

| Key     | Record                           |
|---------|----------------------------------|
| "alice" | {"id": "alice", "role": "admin"} |

Equivalently, calling table.all() returns:

[{"id": "alice", "role": "admin"}]

6. Exceptions

A conforming parser or conforming generator SHALL signal errors for the following conditions. Implementations SHOULD define specific error types or codes for each category.

6.1. Parse errors (ParseError)

A parse error is an error that occurs when reading a JSONLT file due to malformed content. A conforming parser SHALL signal a parse error for the following conditions:

Examples of invalid JSONLT content that would cause parse errors:

Invalid JSON (missing closing brace):

{"id": "alice", "name": "Alice"

Valid JSON but not an object:

["id", "alice"]

Invalid $deleted value (requires boolean true):

{"$deleted": "yes", "id": "alice"}

Duplicate JSON keys:

{"id": "alice", "name": "Alice", "id": "bob"}

6.2. Key errors (KeyError)

A key error is an error that occurs when a key or key specifier is invalid or inconsistent. A conforming parser or conforming generator SHALL signal a key error for the following conditions:

Examples of invalid records that would cause key errors (assuming key specifier "id"):

Missing key field:

{"name": "Alice", "role": "admin"}

Invalid key type (null):

{"id": null, "name": "Alice"}

Invalid key type (boolean):

{"id": true, "name": "Alice"}

Reserved $-prefixed field in record:

{"id": "alice", "name": "Alice", "$custom": "data"}

6.3. File errors (IOError)

An IO error is an error that occurs during file system operations. A conforming parser or conforming generator SHALL signal an IO error for the following conditions:

6.4. Lock errors (LockError)

A lock error is an error that occurs when file locking fails. A conforming parser or conforming generator SHALL signal a lock error for the following conditions:

6.5. Limit errors (LimitError)

A limit error is an error signaled when content exceeds implementation limits.

Mandatory limits: A conforming parser or conforming generator SHALL signal a limit error when:

Optional limits: A conforming parser or conforming generator MAY signal a limit error when:

Implementations SHALL document their limits.

6.6. Transaction errors (TransactionError)

A conflict error is an error that occurs when a transaction commit detects that another process has modified a key that the transaction also modified.

A conforming parser or conforming generator implementing transactions SHALL signal the following transaction errors:

7. API

This section defines the abstract interface that a conforming parser or conforming generator SHALL provide (for operations applicable to each profile). The notation follows the conventions defined in § 2.1 Notation.

7.1. Table constructor

table := Table(path, key?, options?)

Creates or opens a table backed by the file at path. The path parameter is a String. The key parameter is the key specifier; if the file has a header with a key specifier, the provided key SHALL match or be omitted. The options parameter is a TableOptions object; if options is not provided, default values are used. A conforming parser or conforming generator SHALL execute the open a table algorithm. If the file exists, its contents are loaded. If the file does not exist, it will be created on first write.

7.2. TableOptions

autoReload
Boolean, default true. If true, check for external file modifications before read operations by comparing the file’s modification time (mtime) to the last known value.
lockTimeout
Integer (milliseconds), optional. Maximum time to wait when acquiring file locks.

Note: Option names use camelCase to match common JSON naming conventions. Prose in this specification uses hyphenated forms (for example, "auto-reload") when referring to the behavior.

7.3. Read operations

record := table.get(key)

Executes the get a record algorithm. The key parameter is a key. Returns the record for key, or null if no such record exists.

exists := table.has(key)

Executes the check for a record algorithm. The key parameter is a key. Returns true if a record for key exists, false otherwise.

records := table.all()

Executes the get all records algorithm. Returns a List<Record> of all records in the table, in ascending key order.

keys := table.keys()

Executes the get all keys algorithm. Returns a List<Key> of all keys in the table, in ascending key order.

count := table.count()

Executes the count records algorithm. Returns the number of records in the table as an Integer.

records := table.find(predicate)

Executes the find records algorithm. The predicate parameter is a function that takes a Record and returns a Boolean. Returns a List<Record> of all records for which predicate returns true, in ascending key order.

record := table.findOne(predicate)

Executes the find one record algorithm. The predicate parameter is a function that takes a Record and returns a Boolean. Returns the first record (by ascending key order) for which predicate returns true, or null if none match.

7.4. Write operations

table.put(record)

Executes the put a record algorithm. The record parameter is a record. Inserts or updates record in the table. The key is extracted from record using the table’s key specifier.

deleted := table.delete(key)

Executes the delete a record algorithm. The key parameter is a key. Deletes the record for key. Returns true if the record existed, false otherwise.

table.clear()

Executes the clear all records algorithm. Deletes all records from the table by compacting to an empty state.

7.5. Transaction operations

tx := table.transaction()

Executes the begin a transaction algorithm and returns a transaction context. The transaction provides snapshot isolation: read operations within the transaction use the transaction’s snapshot, and write operations buffer changes until the transaction is committed or aborted.

tx.commit()

Executes the commit a transaction algorithm. Writes all buffered operations to the file atomically.

tx.abort()

Executes the abort a transaction algorithm. Discards all buffered operations without modifying the file.

7.6. Maintenance operations

table.compact()

Executes the compact a table algorithm. Rewrites the file as a minimal snapshot, sorted by key.

8. Algorithms

Algorithms in this specification use two forms of error indication:

8.1. Extracting a key

To extract a key from a record given a key specifier:
  1. If key specifier is a string:

    1. Let field be key specifier.

    2. If record does not have a field named field, return a key error indicating missing field.

    3. Let value be the value of record[field].

    4. If value is null, a boolean, an object, or an array, return a key error indicating invalid type.

    5. If value is a number with a fractional component, return a key error indicating value is not an integer.

    6. If value is a number outside the range [−(2⁵³)+1, (2⁵³)−1], return a key error indicating value out of range.

    7. Return value.

  2. If key specifier is a tuple:

    1. If key specifier contains zero field names, return a key error indicating empty key specifier.

    2. If key specifier contains duplicate field names, return a key error indicating duplicate fields.

    3. Let result be an empty list.

    4. For each field in key specifier:

      1. If record does not have a field named field, return a key error indicating missing field.

      2. Let value be the value of record[field].

      3. If value is null, a boolean, an object, or an array, return a key error indicating invalid type.

      4. If value is a number with a fractional component, return a key error indicating value is not an integer.

      5. If value is a number outside the range [−(2⁵³)+1, (2⁵³)−1], return a key error indicating value out of range.

      6. Append value to result.

    5. If result contains exactly one element, return that element.

    6. Return result as a tuple.

  3. Otherwise, return a key error indicating invalid key specifier.

Note: Step 1.4 triggers a key error when the key field value is null, a boolean, an object, or an array. These types are not valid as keys.
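The string-specifier branch of this algorithm can be sketched non-normatively in Python as follows (errors are raised rather than returned, and the function name is illustrative):

```python
def extract_key(record, field):
    """Sketch of the string-specifier branch of extracting a key.

    Illustrative only: signals errors by raising ValueError."""
    if field not in record:
        raise ValueError(f"missing key field: {field}")
    value = record[field]
    # null, booleans, objects, and arrays are not valid key types
    if value is None or isinstance(value, (bool, dict, list)):
        raise ValueError("invalid key type")
    if isinstance(value, float) and not value.is_integer():
        raise ValueError("key value is not an integer")
    if isinstance(value, (int, float)) and not (-(2**53) + 1 <= value <= 2**53 - 1):
        raise ValueError("key value out of range")
    return value
```

Note that the boolean check precedes the numeric range check; in languages where booleans are a numeric subtype (as in Python), the ordering matters.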

8.2. Determining the operation type

To determine the operation type from a parsed JSON object:
  1. If object contains a field named $deleted:

    1. If the value of $deleted is the boolean true, return delete.

    2. Otherwise, return a parse error.

  2. Otherwise, return upsert.

8.3. Computing the logical state

To compute the logical state from a list of operations using key specifier:
  1. Let state be an empty map.

  2. For each operation in list, in order:

    1. Let key be the result of extracting a key from operation using key specifier.

    2. If key is a key error, return key.

    3. Let type be the result of determining the operation type from operation.

    4. If type is a parse error, return type.

    5. If type is delete:

      1. Remove key from state if present.

    6. Otherwise (type is upsert):

      1. Set state[key] to operation (the record).

  3. Return state.

When the same key appears in multiple operations, the last operation in file order determines the key’s final state. This "last write wins" semantic means that later upserts replace earlier ones, and a delete removes any prior record regardless of how many times the key was previously written.

Implementations SHOULD include the line number in error messages when a key error or parse error occurs during logical state computation, to aid debugging.
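The replay loop of this algorithm, together with the operation-type determination of § 8.2, can be sketched non-normatively in Python (the sketch assumes a single string-field key specifier and omits error handling):

```python
def compute_state(operations, key_field):
    """Replay operations in file order; last write wins.

    Illustrative sketch: single string-field key specifier,
    key and tombstone validation omitted."""
    state = {}
    for op in operations:
        key = op[key_field]
        if op.get("$deleted") is True:   # tombstone: remove the key
            state.pop(key, None)
        else:                            # otherwise: upsert the record
            state[key] = op
    return state

ops = [
    {"id": "alice", "value": 1},
    {"id": "bob", "value": 2},
    {"id": "alice", "value": 3},
    {"$deleted": True, "id": "bob"},
]
# Replaying ops leaves one record: alice with value 3.
```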

Computing logical state step by step. Given key specifier "id" and operations:
{"id": "alice", "value": 1}
{"id": "bob", "value": 2}
{"id": "alice", "value": 3}
{"$deleted": true, "id": "bob"}

The algorithm proceeds as follows:

  1. Initial state: {} (empty map)

  2. Process line 1: {"id": "alice", "value": 1}

    • Extract key: "alice"

    • Operation type: upsert

    • Set state["alice"] to record

    • State: {"alice": {"id": "alice", "value": 1}}

  3. Process line 2: {"id": "bob", "value": 2}

    • Extract key: "bob"

    • Operation type: upsert

    • Set state["bob"] to record

    • State: {"alice": {...}, "bob": {"id": "bob", "value": 2}}

  4. Process line 3: {"id": "alice", "value": 3}

    • Extract key: "alice"

    • Operation type: upsert

    • Set state["alice"] to new record (replaces previous)

    • State: {"alice": {"id": "alice", "value": 3}, "bob": {...}}

  5. Process line 4: {"$deleted": true, "id": "bob"}

    • Extract key: "bob"

    • Operation type: delete

    • Remove "bob" from state

    • State: {"alice": {"id": "alice", "value": 3}}

Final logical state: One record with key "alice" and value 3. The record for "bob" was deleted, and "alice" was updated from value 1 to value 3.

8.4. Opening a table

To open a table at path with key specifier and options:
  1. If the file at path exists:

    1. Read and parse the file using the read a table file algorithm.

    2. If the file has a header and key specifier was provided:

      1. If the header’s key specifier does not match key specifier, return an error.

      2. Let effective key specifier be the header’s key specifier.

    3. If the file has a header and key specifier was not provided:

      1. Let effective key specifier be the header’s key specifier.

    4. If the file has no header and key specifier was not provided:

      1. Return an error indicating that a key specifier is REQUIRED when the file has no header.

    5. If the file has no header and key specifier was provided:

      1. Let effective key specifier be key specifier.

    6. If effective key specifier is not a valid key specifier, return an error.

    7. Compute the logical state from the operations using effective key specifier.

  2. If the file does not exist:

    1. If key specifier was not provided, return an error.

    2. If key specifier is not a valid key specifier, return an error.

    3. Initialize with an empty logical state.

  3. Return the table.
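The four-way resolution between the header's key specifier and the caller-provided one (steps 1.2 through 1.5) can be sketched non-normatively as follows (function and parameter names are illustrative):

```python
def resolve_key_specifier(header_key, provided_key):
    """Resolve the effective key specifier when opening a table.

    Illustrative sketch; None stands for 'absent'."""
    if header_key is not None and provided_key is not None:
        if header_key != provided_key:
            raise ValueError("provided key specifier does not match header")
        return header_key
    if header_key is not None:
        return header_key          # header wins when caller omits the key
    if provided_key is not None:
        return provided_key        # no header: caller must supply the key
    raise ValueError("key specifier required when the file has no header")
```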

8.5. Reading a table file

To read a table file at path:
  1. If the file at path does not exist, return an empty list of operations and no header.

  2. Let bytes be the contents of the file at path as a byte sequence.

  3. Let text be bytes decoded as UTF-8. If decoding fails, return an error.

  4. If text begins with a BOM (U+FEFF), strip the BOM.

  5. Let endsWithNewline be true if text ends with a newline character (U+000A), false otherwise.

  6. Let lines be text strictly split on newline characters (U+000A). (Note: Strictly splitting an empty string produces a list containing one empty string; this is handled by step 11.2.)

  7. Strip any trailing CR (U+000D) from each line.

  8. Let header be null.

  9. Let operations be an empty list.

  10. Let lineNumber be 0.

  11. For each line in lines:

    1. Increment lineNumber.

    2. If line is empty:

      1. If this is the last element and endsWithNewline is true, continue (this is the expected trailing empty string from splitting).

      2. Otherwise, skip this line and continue.

    3. If line consists only of whitespace characters, signal a parse error. Skip this line and continue processing.

    4. Let object be the result of parsing line as JSON. The parser SHALL enforce the implementation’s maximum nesting depth; if the depth is exceeded, return a limit error.

    5. If parsing fails:

      1. If this is the last line and endsWithNewline is false, ignore this line and stop processing.

      2. Otherwise, return an error.

    6. If object is not a JSON object, return an error.

    7. If object contains duplicate keys, return an error.

    8. If object contains the field $jsonlt:

      1. If lineNumber is not 1, return a parse error (header can only appear on first line).

      2. If object[$jsonlt] is not a JSON object, return a parse error.

      3. Validate the header structure (REQUIRED version field, optional key, $schema, schema, meta fields). If invalid, return an error.

      4. Set header to the parsed header metadata.

    9. Otherwise:

      1. Validate that object is a valid operation (record or tombstone).

      2. If validation fails (for example, $deleted with non-boolean value), return an error.

      3. Append object to operations.

  12. Return header and operations.
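A non-normative Python sketch of the line-splitting and truncated-final-line behavior above (header detection, duplicate-key checks, and most validation are omitted):

```python
import json

def read_operations(text):
    """Parse decoded file text into a list of operations.

    Illustrative sketch of the read algorithm's line handling."""
    if text.startswith("\ufeff"):
        text = text[1:]                      # strip BOM
    ends_with_newline = text.endswith("\n")
    lines = text.split("\n")                 # strict split on U+000A
    operations = []
    for i, line in enumerate(lines):
        line = line.rstrip("\r")             # tolerate CRLF line endings
        if line == "":
            continue                         # blank line or trailing split artifact
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            if i == len(lines) - 1 and not ends_with_newline:
                break                        # truncated final line: recover silently
            raise                            # malformed interior line: hard error
        operations.append(obj)
    return operations
```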

8.6. Transaction operations

To begin a transaction on a table:
  1. If a transaction is already active on this table instance, return an error.

  2. If auto-reload is enabled, reload the file (acquiring a shared lock if the platform supports it, or briefly an exclusive lock otherwise, to ensure a consistent read).

  3. Let snapshot be a deep copy of the table’s logical state (both the map and all record values are copied; modifications to records in snapshot do not affect the original state).

  4. Let startState be a deep copy of the table’s logical state (for conflict detection at commit).

  5. Let buffer be an empty list.

  6. Return a transaction context with snapshot, startState, and buffer.

Note: See § 10 Concurrency for details on the optimistic concurrency model and locking behavior.

To perform a read operation within a transaction (used by get a record, check for a record, get all records, get all keys, count records, find records, and find one record):
  1. Use the transaction’s snapshot (as modified by writes within the transaction) instead of the table’s logical state.

  2. Return the result based on the snapshot state.

To perform a write operation within a transaction (used by put a record and delete a record):
  1. Extract a key from the record or construct the key for deletion.

  2. Validate the operation as normal (for example, check for $-prefixed fields in records).

  3. Append the operation to the transaction’s buffer.

  4. Update the transaction’s snapshot to reflect the write.

Note: No file lock is acquired and no file modification occurs during this algorithm. File operations are deferred to the commit a transaction algorithm.

To commit a transaction:
  1. If the transaction’s buffer is empty, return successfully (no file modifications needed).

  2. Let startState be the transaction’s starting state (captured at transaction begin).

  3. Acquire the exclusive file lock.

  4. Reload the file to get the current state. If the file no longer exists, release the lock and return an IO error.

  5. For each key that was written in the transaction:

    1. If the current state for that key differs from startState for that key, release the lock and return a conflict error. Two states for a key differ if: (a) one contains a record for that key and the other does not, or (b) both contain records but the records are not logically equal.

  6. Serialize all operations in the buffer. If serialization fails, release the lock and return an error.

  7. Append all serialized lines to the file as a single write. If the write fails, release the lock and return an error. See § 8.6.1 Partial write recovery for recovery semantics.

  8. Sync the file to disk. If the sync fails, release the lock and return an error. See § 8.6.1 Partial write recovery for recovery semantics.

  9. Update the table’s logical state from the buffer.

    Note: The logical state update occurs before releasing the lock to ensure the in-memory state reflects the committed file contents while the lock is still held.

  10. Release the lock.
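The per-key conflict check in step 5 can be sketched non-normatively as follows (Python's equality on JSON-decoded values stands in for logical equality; the names are illustrative):

```python
def detect_conflicts(start_state, current_state, written_keys):
    """Return the written keys that another process modified.

    Illustrative sketch of commit-time conflict detection: a key
    conflicts if its presence differs between the two states, or if
    both states hold records that are not equal."""
    conflicts = []
    for key in written_keys:
        before = start_state.get(key)
        now = current_state.get(key)
        if before != now:
            conflicts.append(key)
    return conflicts
```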

To abort a transaction:
  1. Discard the buffer.

  2. Discard the snapshot.

  3. No file lock is held, so none needs to be released.

8.6.1. Partial write recovery

A partial write failure occurs when a write operation is interrupted before completion (for example, due to a process crash, power failure, or filesystem error). This can leave the file with a truncated final line.

Implementations SHOULD use write-ahead techniques to minimize partial write risk. Strategies include:

If a partial write occurs (detectable on subsequent reads by a final line that lacks a trailing newline and fails JSON parsing), a conforming parser SHOULD treat the partially-written operations as uncommitted and discard them during subsequent table opens. The recovery behavior specified in § 5.3 Line structure handles this case: a conforming parser SHOULD ignore a malformed final line that lacks a trailing newline.

When recovering from a truncated final line, a conforming parser SHOULD:

  1. Discard the partial (non-parseable) content

  2. Process all preceding valid lines normally

  3. Optionally emit a diagnostic indicating the recovery

The resulting logical state reflects only operations from complete, valid lines.

A commit that experiences a partial write failure leaves the transaction in an indeterminate state. The caller SHOULD NOT assume that any operations from the transaction were persisted. If the application requires confirmation of successful commit, it SHOULD re-open the table and verify the expected state.

Note: Applications requiring stronger durability guarantees SHOULD consider using atomic file replacement (write to temporary file, sync, rename) for all writes, not just compaction. This approach sacrifices some append-only efficiency for stronger crash consistency.

9. Table operations

9.1. Getting a record

To get a record for key from a table:
  1. Let state be the table’s logical state.

  2. If state contains key, return state[key].

  3. Otherwise, return null.

9.2. Checking for a record

To check for a record for key in a table:
  1. Let state be the table’s logical state.

  2. Return true if state contains key, false otherwise.

9.3. Getting all records

To get all records from a table:
  1. Let state be the table’s logical state.

  2. Let keys be all keys in state, sorted in ascending order.

  3. Let result be an empty list.

  4. For each key in keys:

    1. Append state[key] to result.

  5. Return result.

9.4. Getting all keys

To get all keys from a table:
  1. Let state be the table’s logical state.

  2. Let keys be all keys in state, sorted in ascending order.

  3. Return keys.

9.5. Counting records

To count records in a table:
  1. Let state be the table’s logical state.

  2. Return the number of records in state.

9.6. Finding records

To find records matching predicate in a table:
  1. Let state be the table’s logical state.

  2. Let keys be all keys in state, sorted in ascending order.

  3. Let results be an empty list.

  4. For each key in keys:

    1. Let record be state[key].

    2. If predicate(record) is true, append record to results.

  5. Return results.

9.7. Finding one record

To find one record matching predicate in a table:
  1. Let state be the table’s logical state.

  2. Let keys be all keys in state, sorted in ascending order.

  3. For each key in keys:

    1. Let record be state[key].

    2. If predicate(record) is true, return record.

  4. Return null.

Implementations SHALL signal an error if predicate cannot be invoked as a function. If predicate returns a value that is not a boolean, the implementation SHOULD coerce the value to boolean using the language’s standard truthiness semantics (where null, zero, and empty string typically evaluate to false); alternatively, the implementation MAY signal an error for non-boolean returns.

If the predicate function throws an exception during find records or find one record, the implementation SHALL propagate the exception to the caller without returning partial results. Within a transaction, a predicate exception does not affect the transaction’s state; the transaction remains usable after the exception is handled.

Using find to query records matching a predicate. Given a table with key specifier "id" and the following logical state:

| Key | Record |
|-----|--------|
| 1 | {"id": 1, "role": "admin", "active": true} |
| 2 | {"id": 2, "role": "user", "active": false} |
| 3 | {"id": 3, "role": "user", "active": true} |

Calling table.find(record => record.active == true) returns:

[
  {"id": 1, "role": "admin", "active": true},
  {"id": 3, "role": "user", "active": true}
]

Records are returned in ascending key order (1, then 3). Record with key 2 is excluded because active is false.

Calling table.findOne(record => record.role == "user") returns:

{"id": 2, "role": "user", "active": false}

The first matching record by key order is returned. Although record 3 also matches, findOne stops at the first match.

Calling table.findOne(record => record.role == "moderator") returns null because no record matches.

9.8. Putting a record

To put a record record into a table:
  1. If record is not a JSON object, return an error.

  2. Let key be the result of extracting a key from record using the table’s key specifier.

  3. If extraction fails, return an error.

  4. If the key length of key exceeds the implementation’s maximum, return a limit error.

  5. If record contains any field whose name begins with $, return an error.

  6. Serialize record to a JSON line using deterministic serialization.

  7. If the record size exceeds the implementation’s maximum, return a limit error.

  8. Acquire the exclusive file lock.

  9. If auto-reload is enabled and the file has been modified since last load, reload the file.

  10. Append the line (followed by newline) to the table’s file.

  11. Update the table’s logical state: set state[key] to record.

  12. Release the lock.
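The serialization and reserved-field check of steps 5 and 6 can be sketched non-normatively in Python (this assumes deterministic serialization means alphabetically sorted keys and compact separators, as in the § 9.11 compaction example; the full serialization rules are defined elsewhere in this specification):

```python
import json

def serialize_record(record):
    """Illustrative sketch: reject $-prefixed fields, then serialize
    with sorted keys and no insignificant whitespace."""
    for name in record:
        if name.startswith("$"):
            raise ValueError(f"reserved field name: {name}")
    return json.dumps(record, sort_keys=True,
                      separators=(",", ":"), ensure_ascii=False)

line = serialize_record({"role": "admin", "id": "alice"})
# line is '{"id":"alice","role":"admin"}' — fields in sorted order
```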

9.9. Deleting a record

To delete a record for key from a table:
  1. If key is not a valid key, return a key error indicating invalid key type.

  2. If the key specifier is a tuple with more than one element:

    1. If key is not a tuple with the same number of elements as the key specifier, return a key error.

  3. If the key specifier is a string or a single-element tuple:

    1. If key is a tuple, return a key error.

  4. Let existed be true if the table’s logical state contains key, false otherwise.

  5. Let tombstone be a new object.

  6. Set tombstone["$deleted"] to true.

  7. If the key specifier is a string:

    1. Let field be the key specifier.

    2. Set tombstone[field] to key.

  8. If the key specifier is a tuple with exactly one element:

    1. Let field be the single element of the key specifier.

    2. Set tombstone[field] to key.

  9. If the key specifier is a tuple with more than one element:

    1. For each field in the key specifier and corresponding value in key:

      1. Set tombstone[field] to value.

  10. Assert: The key specifier is one of: a string, a single-element tuple, or a multi-element tuple. The preceding steps are exhaustive.

  11. Acquire the exclusive file lock.

  12. If auto-reload is enabled and the file has been modified since last load, reload the file.

  13. Serialize tombstone to a JSON line using deterministic serialization.

  14. Append the line (followed by newline) to the table’s file.

  15. Update the table’s logical state: remove key from state.

  16. Release the lock.

  17. Return existed.

Note: The existed return value is informational and reflects the table’s state at the time the delete operation began, not at the time the tombstone was written. In concurrent scenarios with auto-reload enabled, the record could have been created or deleted by another process between the existence check and the file modification. Applications requiring an authoritative answer about whether a record was actually deleted can use a transaction, which provides snapshot isolation.

Note: Deleting a non-existent key is valid and writes a tombstone to the file. This idempotent design allows delete operations to be safely replayed (for example, during synchronization or recovery) without requiring the caller to first check whether the key exists. The tombstone has no effect on the logical state if the key was already absent.
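Tombstone construction (steps 5 through 9) can be sketched non-normatively as follows (the sketch represents a tuple key specifier as a list of field names and a composite key as a tuple of values):

```python
def make_tombstone(key_specifier, key):
    """Build a tombstone object for a delete operation.

    Illustrative sketch: key_specifier is a string or a list of field
    names; for a multi-field specifier, key is a matching tuple."""
    tombstone = {"$deleted": True}
    if isinstance(key_specifier, str):
        tombstone[key_specifier] = key
    elif len(key_specifier) == 1:
        tombstone[key_specifier[0]] = key       # single-element tuple: scalar key
    else:
        for field, value in zip(key_specifier, key):
            tombstone[field] = value            # one field per tuple element
    return tombstone
```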

9.10. Clearing all records

To clear all records from a table:
  1. Acquire the exclusive file lock.

  2. Let lines be an empty list.

  3. If the table has a header:

    1. Serialize the header to a JSON line.

    2. Append the header line to lines.

  4. Write lines to a temporary file in the same directory, with each line followed by a newline (or write an empty file if no header).

  5. Sync the temporary file to disk (fsync or equivalent).

  6. Atomically rename the temporary file to replace the table’s file. (On POSIX systems, this is the rename() system call; on Windows, MoveFileEx with MOVEFILE_REPLACE_EXISTING.)

  7. Set the table’s logical state to an empty map.

  8. Release the lock.

This produces an empty file (or a file containing only the header).

9.11. Compacting a table

To compact a table:
  1. Acquire the exclusive file lock.

  2. Reload the file to ensure the logical state reflects any writes that occurred before the lock was acquired.

  3. Let state be the table’s logical state.

  4. Let keys be all keys in state, sorted in ascending order.

  5. Let lines be an empty list.

  6. If the table has a header:

    1. Serialize the header to a JSON line.

    2. Append the header line to lines.

  7. For each key in keys:

    1. Let record be state[key].

    2. Serialize record to a JSON line using deterministic serialization.

    3. Append the line to lines.

  8. Write lines to a temporary file in the same directory, with each line followed by a newline.

  9. Sync the temporary file to disk (fsync or equivalent).

  10. Atomically rename the temporary file to replace the table’s file.

  11. Release the lock.

Compaction produces a file with one line per live record (plus optional header), sorted by key, with no tombstones or historical operations.

Note: When multiple processes attempt concurrent compaction, file locking serializes the operations. The second process will reload after acquiring the lock and can find the file already compacted; implementations can detect this by comparing file size or line count before and after reload, and skip redundant compaction.

Before compaction, a file might contain the full operation history with non-alphabetical key ordering in records:
{"$jsonlt": {"version": 1, "key": "id"}}
{"role": "user", "id": "alice", "team": "eng"}
{"id": "bob", "role": "user"}
{"role": "admin", "id": "alice", "team": "eng"}
{"id": "charlie", "role": "user"}
{"$deleted": true, "id": "bob"}
{"id": "charlie", "role": "moderator"}

After compaction, the file contains only the current state, sorted by key, with deterministic serialization applied (keys sorted alphabetically):

{"$jsonlt": {"version": 1, "key": "id"}}
{"id": "alice", "role": "admin", "team": "eng"}
{"id": "charlie", "role": "moderator"}

The historical operations (alice’s initial role, bob’s record, charlie’s initial role) and the tombstone for bob are removed. Note how alice’s record now has fields in alphabetical order (id, role, team).

On systems where atomic rename is not available or fails (for example, renaming across filesystems), a conforming generator SHALL use an alternative strategy that preserves atomicity, such as writing to a new file and updating a pointer, or SHALL report an error.
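The write-temp, sync, rename sequence shared by clearing and compaction can be sketched non-normatively in Python (file locking is omitted; `os.replace` provides atomic replacement on both POSIX and Windows):

```python
import os
import tempfile

def atomic_write(path, lines):
    """Atomically replace the file at path with the given lines.

    Illustrative sketch: temp file in the same directory (so the
    rename stays on one filesystem), fsync before rename."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w", encoding="utf-8", newline="\n") as f:
            for line in lines:
                f.write(line + "\n")
            f.flush()
            os.fsync(f.fileno())        # sync to disk before rename
        os.replace(tmp_path, path)      # atomic replacement
    except BaseException:
        os.unlink(tmp_path)             # clean up on failure
        raise
```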

10. Concurrency

10.1. File locking

When multiple processes access the same table file, a conforming generator SHALL ensure that concurrent write operations do not corrupt the file or produce malformed output. A conforming generator SHALL ensure that each write operation produces a complete, valid line followed by a newline character, even when other processes are simultaneously reading or writing the same file.

To achieve this, implementations SHOULD use advisory file locking to coordinate access. The specific mechanism is implementation-defined (for example, fcntl, flock, or platform-specific APIs). Write operations SHOULD acquire an exclusive lock before modifying the file and hold it until the write completes and the file is synced. Read operations that may trigger a reload SHOULD acquire a shared lock if the platform supports shared locks, or briefly acquire an exclusive lock otherwise, to avoid reading a partially-written line.

Note: The testable outcome is file integrity under concurrent access. The specific locking mechanism is implementation guidance; implementations MAY use alternative coordination mechanisms that achieve the same outcome.

Transactions use optimistic concurrency: no lock is held during the transaction, but the exclusive lock is acquired at commit time to perform conflict detection and write the buffered operations atomically.

10.2. Auto-reload behavior

When auto-reload is enabled, a conforming parser or conforming generator SHALL check the file’s modification time (mtime) before read operations return data. If the mtime has changed since the last load, the implementation SHALL reload the file from disk before answering the read.

Inside a transaction, auto-reload occurs only at transaction start. Subsequent reads within the transaction see the snapshot state.

Note: The mtime check adds one stat system call per read operation.

Note: Some filesystems have coarse mtime resolution (for example, HFS+ has 1-second granularity). Implementations SHOULD additionally compare file size to detect changes that occur within the same mtime window. Applications requiring stronger consistency guarantees SHOULD use explicit reload calls or transactions rather than relying on auto-reload.
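The combined mtime-plus-size staleness check recommended above can be sketched non-normatively as follows (the size comparison catches writes that land within a single coarse mtime tick):

```python
import os

def is_stale(path, last_mtime_ns, last_size):
    """Illustrative auto-reload check: reload if mtime or size changed
    since the last load. One stat call per read operation."""
    st = os.stat(path)
    return st.st_mtime_ns != last_mtime_ns or st.st_size != last_size
```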

10.3. Implementation testing guidance

This section is non-normative.

Several normative requirements in this specification are difficult or impossible to test in a declarative, language-agnostic conformance suite:

The [JSONLT-TESTS] suite focuses on format parsing and state computation—behaviors that can be verified with deterministic inputs and outputs. Implementations SHOULD include tests for these platform-specific behaviors in their own test suites, potentially using multi-process test harnesses or platform-specific mocking frameworks.

11. Size and complexity

The key length of a key is the number of bytes in its JSON representation when encoded as UTF-8:

The record size of a record is the number of bytes in its JSON serialization using deterministic serialization, encoded as UTF-8.

A conforming parser or conforming generator SHALL support at minimum:

Implementations MAY support larger limits and SHOULD document their actual limits. Supporting larger values does not affect conformance status.

Note: These limits balance practical needs with platform feasibility. The 1024-byte key limit aligns with common database index key limits. The 1 MiB record size accommodates most practical use cases while preventing memory exhaustion. The 64-level nesting depth exceeds typical JSON usage patterns while remaining within the capabilities of most JSON parsers (some have lower defaults, such as Ruby’s default of 19). The 16-element tuple limit aligns with database compound key practices (SQL Server: 16, PostgreSQL: 32).

Note: The [JSONLT-TESTS] suite includes tests for key length and tuple element limits, which use small test data. Record size limit testing (1 MiB) requires large test files that are impractical for a declarative test suite; implementations SHOULD include record size limit tests in their own test suites.

A conforming parser or conforming generator SHALL signal a limit error when a documented limit is exceeded. Implementations SHALL NOT silently truncate, corrupt, or discard data that exceeds limits.

Key length calculation for a string key:
{"$jsonlt": {"version": 1, "key": "id"}}
{"id": "user_12345_account_settings_preferences", "data": "example"}

The key "user_12345_account_settings_preferences" has a key length of 41 bytes (39 characters + 2 quote bytes). The 1024-byte limit supports keys up to approximately 1022 characters (for ASCII strings without escape sequences).

Tuple key at the 16-element limit:
{"$jsonlt": {"version": 1, "key": ["a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p"]}}
{"a":1,"b":2,"c":3,"d":4,"e":5,"f":6,"g":7,"h":8,"i":9,"j":10,"k":11,"l":12,"m":13,"n":14,"o":15,"p":16,"data":"x"}

This 16-element tuple key is at the maximum allowed limit. A 17th element would cause a limit error.

JSON nesting depth (counting levels):
{"id": "x", "level1": {"level2": {"level3": {"value": 1}}}}

The nesting depth is 4 levels: the root object (1), level1 object (2), level2 object (3), and level3 object (4). The 64-level limit allows substantial nesting while preventing stack overflow from deeply recursive structures.

The following are not normatively constrained:

The following complexity characteristics are informative guidance for typical implementations:

12. Security considerations

JSONLT files are plain text and offer no encryption or access control beyond filesystem permissions. Applications storing sensitive data SHOULD implement encryption, access controls, or other protections at the application or filesystem level.

Note: The following guidance helps protect against resource exhaustion when processing untrusted input:

Path traversal: A conforming parser or conforming generator SHOULD sanitize or validate file paths to prevent directory traversal attacks.
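One way to implement such validation is to resolve the path first and then confirm it remains inside an allowed base directory. This is a sketch under assumptions: the resolve_within helper and the base-directory policy are illustrative, not requirements of this specification. (pathlib.PurePath.is_relative_to requires Python 3.9 or later.)

```python
from pathlib import Path


def resolve_within(base: Path, user_path: str) -> Path:
    # Resolve symlinks and ".." components first, then confirm the result
    # is still inside the base directory; otherwise signal an error.
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base.resolve()):
        raise ValueError(f"path escapes base directory: {user_path!r}")
    return candidate
```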

Symbolic links: When the table path is a symbolic link, a conforming generator performing compaction SHOULD either follow the link (writing the compacted file to the link target) or reject the operation with an error. Compaction SHOULD NOT replace a symbolic link with a regular file. Implementations SHOULD document their symbolic link handling.

13. Privacy considerations

This specification defines a data format with no inherent privacy implications. Applications using this format are responsible for handling any sensitive data they choose to encode in accordance with applicable privacy requirements.

A conforming parser or conforming generator SHOULD NOT log or expose record contents in error messages or debugging output unless explicitly configured to do so.

14. Internationalization considerations

A conforming generator SHALL encode JSONLT files as UTF-8, supporting the full Unicode character set.

Key comparison is based on Unicode code points without normalization. Applications requiring normalization-insensitive key matching SHOULD normalize keys to NFC before storage.
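For example, using Python's unicodedata module (the normalize_key helper is illustrative): NFC composes the sequence "e" + U+0301 (combining acute accent) into the single code point U+00E9, so visually identical keys compare equal after normalization.

```python
import unicodedata


def normalize_key(key: str) -> str:
    # Normalize to NFC before storage so canonically equivalent keys
    # (e.g., precomposed vs. decomposed accents) map to the same record.
    return unicodedata.normalize("NFC", key)


assert "\u00e9" != "e\u0301"                 # distinct code point sequences
assert normalize_key("e\u0301") == "\u00e9"  # equal after NFC
```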

String values may contain any valid Unicode content, escaped according to [RFC8259] string escaping rules.

15. Accessibility considerations

This specification defines a data format with no direct user interface implications. Applications presenting data in this format are responsible for accessible rendering.

16. Extension mechanism

Field names beginning with $ are reserved for this specification and future extensions. A conforming generator SHALL reject records containing field names beginning with $.
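A generator-side check for this requirement might look like the following sketch; the helper name and error type are illustrative, and the check applies to application-supplied records (spec-defined fields such as $deleted are written by the implementation itself, not by applications).

```python
def check_field_names(record: dict) -> dict:
    # Reject application records that use reserved $-prefixed field names.
    for name in record:
        if name.startswith("$"):
            raise ValueError(f"field name {name!r} is reserved ($-prefixed)")
    return record
```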

Future versions of this specification MAY define additional $-prefixed fields in records (beyond the currently-defined $deleted). For forward compatibility, a conforming parser SHOULD preserve unrecognized $-prefixed fields when reading files, rather than stripping them. This allows files written by newer implementations to be read (and re-written during compaction) by older implementations without data loss. If an unrecognized $-prefixed field conflicts with this specification’s semantics (for example, an unknown field in a tombstone), a conforming parser MAY reject the file.

The header’s meta field provides a space for application-defined metadata that does not conflict with the specification.

17. Implementation mapping

This section is non-normative.

This appendix provides guidance on mapping the abstract types and constructs defined in this specification to concrete implementations in various programming languages. The notation and type system are designed to be language-agnostic; implementations can adapt them to idiomatic constructs in their target language.

17.1. Basic types

Integer: Map to the platform’s standard integer type. Implementations need to support the full range of JSON-safe integers (−(2⁵³)+1 to (2⁵³)−1, that is, ±9,007,199,254,740,991). Larger integer types are acceptable; smaller types that cannot represent this range are not conforming.

Note: JSONLT’s integer constraints align with [RFC7493] (I-JSON), which recommends the same range for interoperable JSON integers.
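A range check for this constraint might be sketched as follows (constant and function names are illustrative):

```python
MAX_SAFE_INTEGER = 2**53 - 1  # 9_007_199_254_740_991


def is_json_safe_integer(n: int) -> bool:
    # True when n is exactly representable as an IEEE 754 double,
    # i.e., it round-trips through JSON number serialization.
    return -MAX_SAFE_INTEGER <= n <= MAX_SAFE_INTEGER


assert is_json_safe_integer(9_007_199_254_740_991)
assert not is_json_safe_integer(2**53)
```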

String: Map to the platform’s standard Unicode string type (whether its in-memory encoding is UTF-8, UTF-16, or code points). All string comparisons are based on Unicode code points.

Boolean: Map to the platform’s standard boolean type (true/false, True/False, etc.).

17.2. Compound types

List<T>: Map to the platform’s standard ordered sequence type (array, list, vector, slice, etc.). The element type T is mapped according to these same rules.

Tuple: Map to the platform’s tuple type if available. Languages without native tuples can use arrays or custom structures. A tuple of (String, Integer) represents a compound key with two elements.

Map: The logical state is a map from keys to records. Map to the platform’s standard associative container (dictionary, hash map, object, etc.). Implementations can use ordered maps if key ordering is important for iteration.
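As a concrete illustration of the map-based logical state, the sketch below replays an operation log into a Python dict. It assumes a tombstone is a record carrying "$deleted": true alongside its key fields (see § 16 Extension mechanism), maps tuple keys to hashable Python tuples, and omits validation and limit checks.

```python
import json


def replay(lines):
    # Replay the operation log in order: the latest operation for a key
    # wins, and a tombstone removes the key from the logical state.
    header = json.loads(lines[0])
    key_spec = header["$jsonlt"]["key"]
    state = {}
    for line in lines[1:]:
        record = json.loads(line)
        if isinstance(key_spec, list):                 # tuple key
            key = tuple(record[f] for f in key_spec)
        else:                                          # string key
            key = record[key_spec]
        if record.get("$deleted"):
            state.pop(key, None)                       # tombstone
        else:
            state[key] = record
    return state
```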

17.3. Null and optional values

T | Null: Represents a value that may be absent. Map to the platform’s standard optional or nullable type:

17.4. Predicates

The find and findOne operations accept a predicate function. Map to the platform’s standard callable type:

The predicate receives a Record and returns a Boolean indicating whether the record matches.
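In Python, for instance, the predicate maps naturally to a callable; find_one and its state argument below are an illustrative mapping of the findOne operation, not a required signature.

```python
from typing import Callable, Optional

Record = dict
Predicate = Callable[[Record], bool]


def find_one(state: dict, predicate: Predicate) -> Optional[Record]:
    # Return the first record for which the predicate is true, else None.
    for record in state.values():
        if predicate(record):
            return record
    return None


state = {"a": {"id": "a", "n": 1}, "b": {"id": "b", "n": 2}}
assert find_one(state, lambda r: r["n"] > 1) == {"id": "b", "n": 2}
```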

17.5. Thread safety

Thread safety for concurrent access within a single process is an implementation concern and is not specified normatively. Implementations SHOULD document their thread safety properties and MAY provide options for enabling or disabling internal locking. Implementations MAY use synchronization mechanisms such as mutex locks, read-write locks, or atomic operations based on the target platform’s threading model.

Note: This specification addresses inter-process concurrency through file locking (§ 10.1 File locking) because file integrity is an interoperability concern—two processes need to coordinate to avoid corrupting shared files. Intra-process thread safety, by contrast, is an internal implementation detail that does not affect file format interoperability.

18. Profile requirement summary

This section is non-normative.

This appendix provides a summary of normative requirements by conformance profile for quick reference. Requirements are identified by the section containing them. This summary restates requirements defined normatively in the referenced sections; in case of any discrepancy, the normative sections govern.

18.1. Parser requirements

A conforming parser SHALL:

A conforming parser SHOULD:

18.2. Generator requirements

A conforming generator SHALL:

A conforming generator SHOULD:

18.3. Both profiles

Both conforming parser and conforming generator SHALL:

Both profiles MAY:

19. Acknowledgments

This section is non-normative.

This specification was developed with input from contributors who reviewed drafts and provided feedback. The design was informed by related work including [BEADS], which uses JSONL for git-backed structured storage.

The notation conventions used in the § 7 API section were inspired by [RFC9622], which provides a model for describing abstract interfaces in a language-agnostic manner.

19.1. AI assistance disclosure

The development of this specification involved the use of AI language models, specifically Claude (Anthropic). AI tools contributed to the following aspects of this work:

All normative requirements, technical design decisions, and final specification text were determined by human authors. AI-generated content was reviewed, edited, and validated against the specification’s design goals. The authors take full responsibility for the technical accuracy and correctness of this document.

This disclosure is provided in the interest of transparency regarding modern specification development practices.

Conformance

Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

This is an example of an informative example.

Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

Note, this is an informative note.

References

Normative References

[INFRA]
Anne van Kesteren; Domenic Denicola. Infra Standard. Living Standard. URL: https://infra.spec.whatwg.org/
[JSONL]
Ian Ward. JSON Lines. Accessed December 2025. URL: https://jsonlines.org/
[RFC2119]
S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. URL: https://www.rfc-editor.org/rfc/rfc2119
[RFC3629]
F. Yergeau. UTF-8, a transformation format of ISO 10646. URL: https://www.rfc-editor.org/rfc/rfc3629
[RFC8174]
B. Leiba. Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words. URL: https://www.rfc-editor.org/rfc/rfc8174
[RFC8259]
T. Bray. The JavaScript Object Notation (JSON) Data Interchange Format. URL: https://www.rfc-editor.org/rfc/rfc8259

Informative References

[BEADS]
Steve Yegge. Beads: Distributed, git-backed graph issue tracker for AI agents. URL: https://github.com/steveyegge/beads
[IEEE754]
IEEE Standard for Floating-Point Arithmetic. 2019. URL: https://ieeexplore.ieee.org/document/8766229
[JSON-SCHEMA]
A. Wright; et al. JSON Schema: A Media Type for Describing JSON Documents. URL: https://json-schema.org/draft/2020-12/json-schema-core
[JSONLT-TESTS]
Tony Burns. JSONLT 1.0 Conformance Test Suite. URL: https://spec.jsonlt.org/tests/
[RFC6838]
N. Freed; J. Klensin; T. Hansen. Media Type Specifications and Registration Procedures. URL: https://www.rfc-editor.org/rfc/rfc6838
[RFC7493]
T. Bray. The I-JSON Message Format. URL: https://www.rfc-editor.org/rfc/rfc7493
[RFC8785]
A. Rundgren; B. Jordan; S. Erdtman. JSON Canonicalization Scheme (JCS). URL: https://www.rfc-editor.org/rfc/rfc8785
[RFC9622]
B. Trammell; et al. A TAPS Interface to Transport Services. URL: https://www.rfc-editor.org/rfc/rfc9622