I've just started using this library to check that parsed JSON objects conform to the types I define. I'm really enjoying it, but I'm afraid I've hit a performance snag.
Everything works fine in small tests, but when I ran it on a slightly bigger input file (~10 MB) I noticed that it was really slow.
I've tried the latest version and the beta, and `check` vs `parse`, with similar results. The beta is faster, but I don't see a big difference between `check` and `parse`.
After profiling the `check` version in the beta, I'm seeing calls to `.check` taking 500–1100 ms per object.

Are those numbers typical, or am I doing something wrong with the schema definitions?
My schema definitions look like:
```ts
// Sub-schemas first, so they're defined before EntryJsonSchema references them.
const ContentJsonSchema = z.object({
  id: z.string().optional().nullable(),
  title: z.string().optional().nullable(),
  version: z.union([z.number(), z.string()]).optional().nullable(),
});

const AnswerJsonSchema = z.object({
  key: z.string().optional().nullable(),
  value: z.any().optional().nullable(),
});

const ResultJsonSchema = z.object({
  key: z.string().optional().nullable(),
  value: z.any().optional().nullable(),
});

const EntryJsonSchema = z.object({
  a: z.string().optional().nullable(),
  b: z.string().optional().nullable(),
  id: z.string().optional().nullable(),
  creation: z.string().optional().nullable(),
  content: ContentJsonSchema.optional().nullable(),
  labels: z.string().array().optional().nullable(),
  answers: AnswerJsonSchema.array().optional().nullable(),
  results: ResultJsonSchema.array().optional().nullable(),
});
```
I'm really hoping there is a way to speed this up, as it's too expensive for my use case, where I'll need to process files with 100k+ objects.
Thanks!