Row Codecs
Reading rows from JDBC means calling rs.getInt(1), rs.getString(2), rs.getTimestamp(3) — column by column, in the right order, with the right types. Get any of it wrong and you get a ClassCastException at runtime. Add a column to your query and you silently shift every index after it.
A RowCodec<T> replaces all of that with a single declaration: you list the database types and a constructor, and the codec does the rest. Once defined, the same codec drives everything the library does with your type:
- Reading — decodes rows from a ResultSet (queries) or a CallableStatement (stored procedures)
- Writing — encodes values into a PreparedStatement for inserts, updates, and batch operations
- Streaming — feeds rows into the PostgreSQL COPY protocol for high-throughput inserts
- JSON — round-trips your type to and from JSON objects using column names as keys
- Analysis — Query Analysis inspects the codec's types to verify them against the database schema
You define the mapping once, and it propagates everywhere.
Foundations JDBC has two kinds of row codecs:
- Named codecs (RowCodecNamed) — track column names; recommended for most use cases. Created via RowCodec.namedBuilder().
- Positional codecs (RowCodecUnnamed) — track only column positions, for quick single- and multi-column reads. Created via RowCodec.of(type) or RowCodec.builder().
The same codec drives reads, writes, JSON encoding, composite types, and query analysis.
Named Row Codecs
A named row codec tracks both types and column names. This is the recommended default — the small overhead of naming fields pays for itself quickly:
- Kotlin
- Java
- Scala
data class Product(val id: Int, val name: String, val price: BigDecimal, val createdAt: Instant)
val productCodec: RowCodecNamed<Product> =
RowCodec.namedBuilder<Product>()
.field("id", PgTypes.int4, Product::id)
.field("name", PgTypes.text, Product::name)
.field("price", PgTypes.numeric, Product::price)
.field("created_at", PgTypes.timestamptz, Product::createdAt)
.build(::Product)
// Column list for SQL — no hand-written strings to keep in sync
val allProducts =
sql { "SELECT ${productCodec.columnList} FROM product" }
record Product(Integer id, String name, BigDecimal price, Instant createdAt) {}
RowCodecNamed<Product> productCodec =
RowCodec.<Product>namedBuilder()
.field("id", PgTypes.int4, Product::id)
.field("name", PgTypes.text, Product::name)
.field("price", PgTypes.numeric, Product::price)
.field("created_at", PgTypes.timestamptz, Product::createdAt)
.build(Product::new);
// Column list for SQL — no hand-written strings to keep in sync
Fragment allProducts =
Fragment.of("SELECT ").append(productCodec.columnList()).append(" FROM product");
case class Product(
id: Int,
name: String,
price: BigDecimal,
createdAt: Instant
)
val productCodec: RowCodecNamed[Product] =
RowCodec
.namedBuilder[Product]()
.field("id", PgTypes.int4)(_.id)
.field("name", PgTypes.text)(_.name)
.field("price", PgTypes.numeric)(_.price)
.field("created_at", PgTypes.timestamptz)(_.createdAt)
.build(Product.apply)
// Column list — no hand-written strings to keep in sync
val allProducts =
sql"SELECT ${productCodec.columnList} FROM product"
Having names lets you:
- columnList() — emit column names as a Fragment for SELECT clauses, so queries stay in sync with the codec
- columnNames() — get column names as a list
- Fragment.insertInto(table, codec) — generate a complete INSERT template from the codec's column metadata
- Fragment.insertIntoReturning(table, codec) — same, with a RETURNING clause that parses the inserted row back
- fragment.row(codec, value) — emit an object's fields as comma-separated parameters for custom INSERT patterns
- DbJsonRow.jsonObject(codec) — build a JSON object codec with column names as keys
Single-Column Codec
For single-column queries, use the simpler of() factory:
- Kotlin
- Java
- Scala
val idCodec: RowCodec<Int> = RowCodec.of(PgTypes.int4)
RowCodec<Integer> idCodec = RowCodec.of(PgTypes.int4);
val idCodec: RowCodec[Int] = RowCodec.of(PgTypes.int4)
For a single named column (preserving the column name for joins), use RowCodec.ofNamed("name", type).
Nullable Columns
Use .opt() to wrap a type for nullable columns:
- Kotlin
- Java
- Scala
data class Person(val id: Int, val name: String, val createdAt: Instant?)
val personCodec: RowCodec<Person> =
RowCodec.builder<Person>()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz.opt(), Person::createdAt)
.build(::Person)
record Person(Integer id, String name, Optional<Instant> createdAt) {}
RowCodec<Person> personCodec =
RowCodec.<Person>builder()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz.opt(), Person::createdAt)
.build(Person::new);
case class Person(id: Int, name: String, createdAt: Option[Instant])
val personCodec: RowCodec[Person] = RowCodec
.builder[Person]()
.field(PgTypes.int4)(_.id)
.field(PgTypes.text)(_.name)
.field(PgTypes.timestamptz.opt)(_.createdAt)
.build(Person.apply)
Composing Codecs for Joins
Row codecs compose for joins. Given a productCodec and a categoryCodec, combine them with .join() or .leftJoin():
- Kotlin
- Java
- Scala
// Inner join — both sides always present
val innerJoined: RowCodec<Pair<ProductRow, CategoryRow>> =
productCodec.join(categoryCodec)
// Left join — right side is nullable
val leftJoined: RowCodec<Pair<ProductRow, CategoryRow?>> =
productCodec.leftJoin(categoryCodec)
// Inner join — both sides always present
RowCodec<Tuple.Tuple2<ProductRow, CategoryRow>> innerJoined = productCodec.join(categoryCodec);
// Left join — right side is Optional (nullable in Kotlin, Option in Scala)
RowCodec<Tuple.Tuple2<ProductRow, Optional<CategoryRow>>> leftJoined =
productCodec.leftJoin(categoryCodec);
// Inner join — both sides always present
val innerJoined: RowCodec[(ProductRow, CategoryRow)] =
productCodec.join(categoryCodec)
// Left join — right side is Option
val leftJoined: RowCodec[(ProductRow, Option[CategoryRow])] =
productCodec.leftJoin(categoryCodec)
The result type is Tuple2<A, B> in Java (with ._1() and ._2() accessors), Pair<A, B> in Kotlin, and a tuple (A, B) in Scala. Left join wraps the right side in Optional (or nullable in Kotlin, Option in Scala).
Named codecs also have .join() and .leftJoin() methods that preserve column names through the composition, so the combined codec can still be used with columnList(), Fragment.insertInto(), and JSON encoding.
The same Tuple types appear whenever the library needs to return multiple values without a dedicated record type — RowCodec.of(type1, type2, ...) for multi-column ad-hoc queries, .combine() for composed operations, and .join() for joins all return TupleN. Accessors are 1-based: ._1(), ._2(), ._3(), etc.
This is why row codecs use index-based reading rather than column names. When you join two tables, both may have columns named id or name. Column-name-based reading would silently return the wrong value. Index-based reading makes composition safe — each codec reads its own slice of columns in sequence, and name clashes are irrelevant.
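To make the composition argument concrete, here is a minimal sketch of index-based reading. MiniCodec and its width/read members are invented for illustration, not the library's implementation: each codec consumes a fixed-width slice of the row, and join simply offsets the right codec past the left one, so duplicate column names on the two sides can never collide.

```java
import java.util.function.BiFunction;

// Invented mini-codec: reads `width` consecutive slots starting at an offset.
// A sketch of the idea only, not the library's RowCodec.
record MiniCodec<T>(int width, BiFunction<Object[], Integer, T> read) {
    // Left codec reads columns [offset, offset + width); the right codec
    // reads the slice that starts immediately after. Names are irrelevant.
    <U> MiniCodec<Object[]> join(MiniCodec<U> right) {
        return new MiniCodec<>(width + right.width(),
            (row, offset) -> new Object[] {
                read.apply(row, offset),
                right.read().apply(row, offset + width)
            });
    }
}

public class JoinSketch {
    public static void main(String[] args) {
        // Both "tables" have an id and a name column; with name-based reading
        // these would clash, with index-based reading they cannot.
        MiniCodec<String> product =
            new MiniCodec<>(2, (row, off) -> row[off] + ":" + row[off + 1]);
        MiniCodec<String> category =
            new MiniCodec<>(2, (row, off) -> row[off] + ":" + row[off + 1]);

        // product.id, product.name, category.id, category.name
        Object[] row = {1, "Widget", 7, "Tools"};
        Object[] pair = product.join(category).read().apply(row, 0);
        System.out.println(pair[0] + " / " + pair[1]);  // 1:Widget / 7:Tools
    }
}
```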
Disambiguating duplicate column names with .aliased()
If the two sides of a named join share any column name (the common id, name, created_at case), the joined codec's columnList() concatenates the names and throws:
RowCodecNamed.columnList() has duplicate column names [id, name]
(typically from a join of codecs sharing a column name). Call .aliased("x")
on each side before composing, ...
Call .aliased("alias") on each side before composing. Aliasing prefixes every column name with alias., so the joined codec has unambiguous entries (e.id, e.name, ..., d.id, d.name) and columnList() is usable directly in the SELECT:
var joined = empCodec.aliased("e").leftJoin(deptCodec.aliased("d"));
Fragment.of("SELECT ")
.append(joined.columnList())
.append(" FROM emp e LEFT JOIN dept d ON e.department = d.name")
.query(joined.all());
val joined = empCodec.aliased("e").leftJoin(deptCodec.aliased("d"))
sql { "SELECT ${joined.columnList} FROM emp e LEFT JOIN dept d ON e.department = d.name" }
.query(joined.all())
An aliased codec is SELECT-friendly but not INSERT-friendly — use the unaliased codec for Fragment.insertInto(table, codec).
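What aliasing does to the column metadata can be sketched in a few lines. The helper names below (aliased, columnList) are invented stand-ins, not the library's code: prefixing each name with the alias makes a join of two codecs that both carry id and name unambiguous, and the duplicate check mirrors the error described above.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;

// Invented sketch of aliasing: prefix every column name with "alias.",
// and reject duplicate names when emitting a column list.
public class AliasSketch {
    static List<String> aliased(String alias, List<String> cols) {
        return cols.stream().map(c -> alias + "." + c).toList();
    }

    static String columnList(List<String> cols) {
        if (new HashSet<>(cols).size() != cols.size())
            throw new IllegalStateException("duplicate column names " + cols);
        return String.join(", ", cols);
    }

    public static void main(String[] args) {
        List<String> emp = List.of("id", "name", "department");
        List<String> dept = List.of("id", "name");
        List<String> joined = new ArrayList<>(aliased("e", emp));
        joined.addAll(aliased("d", dept));
        // Without aliasing, columnList(emp ++ dept) would throw on the
        // duplicate id/name entries; aliased names are unique.
        System.out.println(columnList(joined));
        // e.id, e.name, e.department, d.id, d.name
    }
}
```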
Result Modes
A codec defines the shape of a row. Result modes define how many rows the query returns:
| Mode | Returns | Behavior |
|---|---|---|
| .all() | List<T> | Collect all rows |
| .exactlyOne() | T | Expect exactly one row, throw if 0 or 2+ |
| .maxOne() | Optional<T> | Return 0 or 1 rows |
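The three behaviors can be sketched over an already-fetched list of rows. These static helpers are an illustration of the semantics only; the real library applies the mode while consuming the ResultSet, without materializing extra rows.

```java
import java.util.List;
import java.util.Optional;

// Invented sketch of the three result modes, applied to a list of rows.
public class ResultModes {
    static <T> List<T> all(List<T> rows) { return rows; }

    static <T> T exactlyOne(List<T> rows) {
        if (rows.size() != 1)
            throw new IllegalStateException("expected exactly 1 row, got " + rows.size());
        return rows.get(0);
    }

    static <T> Optional<T> maxOne(List<T> rows) {
        if (rows.size() > 1)
            throw new IllegalStateException("expected at most 1 row, got " + rows.size());
        return rows.isEmpty() ? Optional.empty() : Optional.of(rows.get(0));
    }

    public static void main(String[] args) {
        System.out.println(all(List.of(1, 2, 3)));    // [1, 2, 3]
        System.out.println(exactlyOne(List.of(42)));  // 42
        System.out.println(maxOne(List.of()));        // Optional.empty
    }
}
```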
Data-Driven Inserts
Fragment.insertIntoReturning() generates a complete INSERT statement from a named codec — column list, parameter placeholders, and RETURNING clause. To skip columns with database defaults, pass their names as the trailing except arguments:
- Kotlin
- Java
- Scala
fun insert(product: Product): Product =
Fragment.insertIntoReturning("product", productCodec)
.on(product)
.run(conn)
// Skip columns with database defaults — pass column names to except
fun insertWithDefault(product: Product): Product =
Fragment.insertIntoReturning("product", productCodec, "id")
.on(product)
.run(conn)
Product insert(Product product) {
return Fragment.insertIntoReturning("product", productCodec).on(product).run(conn);
}
// Skip columns with database defaults — pass column names to except
Product insertWithDefault(Product product) {
return Fragment.insertIntoReturning("product", productCodec, "id").on(product).run(conn);
}
def insert(product: Product): Product =
Fragment
.insertIntoReturning("product", productCodec)
.on(product)
.run(conn)
// Skip columns with database defaults — pass column names to except
def insertWithDefault(product: Product): Product =
Fragment
.insertIntoReturning("product", productCodec, "id")
.on(product)
.run(conn)
Generated Keys Inserts
For databases that don't support RETURNING (DB2, Oracle, SQL Server, MariaDB), use Fragment.insertIntoGeneratedKeys(). It works like insertIntoReturning but uses JDBC's getGeneratedKeys() API to read back the generated columns:
- Kotlin
- Java
- Scala
// For databases without RETURNING (DB2, Oracle, SQL Server, MariaDB):
fun insertGeneratedKey(product: Product): Int =
Fragment.insertIntoGeneratedKeys(
"product", productCodec, arrayOf("id"),
RowCodec.of(PgTypes.int4).exactlyOne(), "id")
.on(product)
.run(conn)
// For databases without RETURNING (DB2, Oracle, SQL Server, MariaDB):
int insertGeneratedKey(Product product) {
return Fragment.insertIntoGeneratedKeys(
"product",
productCodec,
new String[] {"id"},
RowCodec.of(PgTypes.int4).exactlyOne(),
"id")
.on(product)
.run(conn);
}
// For databases without RETURNING (DB2, Oracle, SQL Server, MariaDB):
def insertGeneratedKey(product: Product): Int =
Fragment
.insertIntoGeneratedKeys("product", productCodec, Array("id"), RowCodec.of(PgTypes.int4).exactlyOne(), "id")
.on(product)
.run(conn)
Pass the generated column names as the third argument, and the same column names to except so they're excluded from the INSERT's column list and VALUES clause.
Positional Codecs
Use positional codecs when column names aren't needed — e.g., single-use queries or performance-sensitive paths. You build a positional codec by listing .field() calls — one per column, in SELECT order — and finishing with .build(constructor):
- Kotlin
- Java
- Scala
data class Person(val id: Int, val name: String, val createdAt: Instant)
val personCodec: RowCodec<Person> =
RowCodec.builder<Person>()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz, Person::createdAt)
.build(::Person)
record Person(Integer id, String name, Instant createdAt) {}
RowCodec<Person> personCodec =
RowCodec.<Person>builder()
.field(PgTypes.int4, Person::id)
.field(PgTypes.text, Person::name)
.field(PgTypes.timestamptz, Person::createdAt)
.build(Person::new);
case class Person(id: Int, name: String, createdAt: Instant)
val personCodec: RowCodec[Person] = RowCodec
.builder[Person]()
.field(PgTypes.int4)(_.id)
.field(PgTypes.text)(_.name)
.field(PgTypes.timestamptz)(_.createdAt)
.build(Person.apply)
Each .field() takes a DbType that models the exact database column type. DbType<A> knows how to read a value of type A from a ResultSet and write it to a PreparedStatement — no JDBC integer codes, no manual rs.getX() calls. Each supported database has its own set (PgTypes, DuckDbTypes, MariaDbTypes, etc.) with full-precision mappings for every type. See Database Types for the complete catalog.
The builder is fully type-safe: the constructor receives exactly the types you declared, with no casts. Columns are read by index — the order of .field() calls must match the column order in your SELECT.
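The DbType contract described above can be sketched as a paired reader/writer for one column. MiniDbType and its INT4 instance below are invented names for illustration; the library's real DbType carries additional metadata (the SQL type name, precision) that query analysis uses. The main method exercises the reader against a stand-in ResultSet built with a dynamic proxy, purely so the sketch runs without a database.

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Invented sketch of a DbType: knows how to read a value of type A from a
// ResultSet column and write it to a PreparedStatement parameter.
interface MiniDbType<A> {
    A get(ResultSet rs, int index) throws SQLException;
    void set(PreparedStatement ps, int index, A value) throws SQLException;
}

public class DbTypeSketch {
    // One entry per database column type, e.g. a 32-bit integer:
    static final MiniDbType<Integer> INT4 = new MiniDbType<>() {
        public Integer get(ResultSet rs, int i) throws SQLException { return rs.getInt(i); }
        public void set(PreparedStatement ps, int i, Integer v) throws SQLException { ps.setInt(i, v); }
    };

    public static void main(String[] args) throws Exception {
        // Stand-in ResultSet (a dynamic proxy answering getInt with 7),
        // just to exercise `get` without a live connection.
        ResultSet rs = (ResultSet) Proxy.newProxyInstance(
            DbTypeSketch.class.getClassLoader(),
            new Class<?>[] { ResultSet.class },
            (proxy, method, margs) -> "getInt".equals(method.getName()) ? 7 : null);
        System.out.println(INT4.get(rs, 1));  // 7
    }
}
```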