Turn model reasoning into a workflow decision.

Conseq is the decision API for AI workers. Start with one public prediction call: `POST /api/v2/predict`. Branch on the returned `decision`, execute only when your policy allows it, and keep traceability, fallback behavior, and outcome verification behind the same contract.

Public API operations
  • Start with `POST /api/v2/predict` for the canonical HTTP integration path
  • Use the returned `decision` field as the machine-readable branch point
  • Log `predictionId` so operators can inspect what Conseq knew before execution
  • Record actual outcomes later through `POST /api/v1/outcomes`
  • `GET /api/v1/jobs/:jobId` remains the current async polling path
  • The JS/TS SDK wraps the same predict-first flow
  • Supported pricing action shapes: set price, percent markdown, price increase, promo start, promo end
  • Context inputs: inventory pressure, seasonality, elasticity hints, competitor behavior, promo flags

Credit model: one pricing prediction currently spends one API credit. Outcome recording and prediction reads do not spend credits. The commercial model stays tied to prediction usage while Conseq is still a prediction-first product.

Conseq does not claim completed-action pricing yet. That model only becomes honest if the product later executes or directly governs those actions in production.

Canonical prediction transport: `v2`. Stable pricing contract: `v1`.

The response is meant to control a workflow, not just explain one.

Model call

Useful for one-off reasoning, but it does not automatically give you a stable branch point, traceability, or measured follow-through.

Conseq predict call

Returns a machine-readable `decision`, stores a prediction record, and keeps fallback behavior and verification tied to the same workflow step.

Why this matters

The value is not one clever answer. It is stopping a bad policy from repeating across high-volume actions while keeping evidence of what the system knew at the time.

Fast path and fallback

  • Conseq targets `1200ms` for normal synchronous pricing checks.
  • Clients should treat `1500ms` as the recommended timeout budget.
  • If Conseq does not return in time, default to `ESCALATE`.

If Conseq is slow or unavailable, do not auto-approve the pricing action. Escalate it to human review or keep the current price in place.

These values are also exposed by `serviceStatus` and the REST response headers for `POST /api/v2/predict`.
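Under those budgets, the fallback can be a small client-side wrapper that races the predict call against the `1500ms` timeout and defaults to `ESCALATE` on timeout or transport failure. This is a sketch; `withEscalateFallback` is an illustrative name, not an SDK export.

```typescript
type Decision = "ALLOW" | "WARN" | "ESCALATE";

// Race a predict call against the recommended client timeout budget.
// Timeout, network error, or rejection all collapse to ESCALATE.
async function withEscalateFallback(
  predictCall: Promise<{ decision: Decision }>,
  timeoutMs = 1500,
): Promise<Decision> {
  const timeout = new Promise<Decision>((resolve) =>
    setTimeout(() => resolve("ESCALATE"), timeoutMs),
  );
  try {
    return await Promise.race([predictCall.then((r) => r.decision), timeout]);
  } catch {
    // Transport failure: behave exactly like ESCALATE.
    return "ESCALATE";
  }
}
```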

Safe behavior when Conseq is slow, unavailable, or uncertain.

  • `ALLOW`: continue only if your own policy permits automatic execution.
  • `WARN`: stop automation and route the action into review.
  • `ESCALATE`: keep the current state and escalate to a human.
  • Timeout, network error, or `5xx`: behave exactly like `ESCALATE`.
  • `429`, `INSUFFICIENT_CREDITS`, or daily quota failures: stop and restore capacity before retrying.
  • `INVALID_INPUT` or `UNSUPPORTED_CAPABILITY`: fix the request instead of retrying blindly.
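The rules above collapse into one client-side switch. This is a sketch of caller policy, not API behavior: `allowAutoExecution` and the outcome labels are illustrative names.

```typescript
type Decision = "ALLOW" | "WARN" | "ESCALATE";
type CallerAction = "execute" | "review" | "hold" | "backoff" | "fix-request";

// Map a returned decision to what the caller should do next.
function routeDecision(decision: Decision, allowAutoExecution: boolean): CallerAction {
  switch (decision) {
    case "ALLOW":
      return allowAutoExecution ? "execute" : "review";
    case "WARN":
      return "review";
    default:
      return "hold"; // ESCALATE: keep current state, involve a human
  }
}

// Map an error code to recovery behavior.
function routeError(code: string): CallerAction {
  switch (code) {
    case "RATE_LIMIT_EXCEEDED":
    case "INSUFFICIENT_CREDITS":
    case "DAILY_QUOTA_EXCEEDED":
      return "backoff"; // stop and restore capacity before retrying
    case "INVALID_INPUT":
    case "UNSUPPORTED_CAPABILITY":
      return "fix-request"; // do not retry blindly
    default:
      return "hold"; // timeout, 5xx, unknown: behave like ESCALATE
  }
}
```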

Predict first. Branch second. Execute last.

  • Call `predict` before your worker, operator, or workflow makes a costly change.
  • If the response is `ALLOW`, continue. If it is `WARN` or `ESCALATE`, stop or review.
  • Use the same request envelope in HTTP, the sandbox, and the SDK.
  • Keep outcomes, calibration, and connectors behind the same public contract.

Confidence and tail risk are first-class outputs. Do not let polished reasoning text override a low-confidence prediction or a `WARN` or `ESCALATE` decision.

Generic request envelope

{
  "capability": "PRICING",
  "actionType": "PERCENT_MARKDOWN",
  "companyId": "northstar-commerce",
  "actor": {
    "kind": "AI_AGENT",
    "id": "pricing-worker",
    "label": "Production pricing worker"
  },
  "target": {
    "type": "SKU",
    "id": "BUNDLE-15",
    "name": "Electronics Bundle"
  },
  "context": {
    "category": "electronics",
    "oldPrice": 100,
    "priceChangePercent": 15,
    "promoDurationDays": 7,
    "inventoryPressure": 0.7,
    "seasonalityIndex": 1.15,
    "priceElasticityHint": 0.68,
    "recentCompetitorBehavior": "MATCHING_DISCOUNTS",
    "promoAlreadyActive": false,
    "recentDailyRevenue": 4200,
    "recentDailyUnits": 42,
    "marginRate": 0.46,
    "competitorVolatility": 0.72,
    "currentConversionRate": 0.034
  }
}
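A TypeScript shape for that envelope might look like the following. The field names mirror the JSON above; the types are inferred from the example, not the published schema.

```typescript
// Illustrative typing of the generic request envelope.
interface PredictRequest {
  capability: string; // e.g. "PRICING"
  actionType: string; // e.g. "PERCENT_MARKDOWN"
  companyId?: string;
  actor: { kind: string; id: string; label?: string };
  target: { type: string; id: string; name?: string };
  context: Record<string, string | number | boolean>;
}

// The example above, rebuilt as a typed value.
const request: PredictRequest = {
  capability: "PRICING",
  actionType: "PERCENT_MARKDOWN",
  companyId: "northstar-commerce",
  actor: { kind: "AI_AGENT", id: "pricing-worker", label: "Production pricing worker" },
  target: { type: "SKU", id: "BUNDLE-15", name: "Electronics Bundle" },
  context: { oldPrice: 100, priceChangePercent: 15, promoAlreadyActive: false },
};
```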

Backend GraphQL naming still exists underneath

The public product language is `predict`, but the AppSync contract still exposes `predictConsequence` while the HTTP `v2` surface is being generalized.

mutation PredictConsequence($input: ConsequencePredictionInput!) {
  predictConsequence(input: $input) {
    prediction {
      id
      capability
      modelVersion
      reasoningVersion
      actionType
      customerAccountId
      customerAccountName
      requestSource
      requestCallerId
      requestCallerEmail
      requestApiKeyId
      requestApiKeyLabel
      expectedValue
      expectedValueLabel
      tailRisk
      confidence
      actionPolicy
      immediateOutcome
      secondOrderEffects
      thirdOrderEffects
      recommendation
      reasoning
      outcomeStatus
      accuracyStatus
      outcomeSource
      outcomeActorId
      outcomeActorLabel
    }
    job {
      id
      status
      pollAfterMs
      predictionId
    }
    error {
      code
      message
    }
  }
}

Poll background predictions

query GetConsequenceJob($id: ID!) {
  getConsequenceJob(id: $id) {
    id
    status
    pollAfterMs
    predictionId
    lastError
  }
}

Sync for preflight, async for heavier work

  • `SYNC` is for low-latency pre-execution checks.
  • `ASYNC` queues a background job and returns `job.id`, `status`, and `pollAfterMs`.
  • Once the job is `COMPLETED`, use `predictionId` to load the finished prediction.
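The `ASYNC` loop above can be sketched as follows. `fetchJob` stands in for the `GET /api/v1/jobs/:jobId` call; the field names follow the job shape shown in the GraphQL query, and the retry cap is an assumption.

```typescript
interface Job {
  id: string;
  status: "PENDING" | "RUNNING" | "COMPLETED" | "FAILED";
  pollAfterMs: number;
  predictionId?: string;
  lastError?: string;
}

// Poll until the job reaches a terminal state, honoring pollAfterMs.
async function waitForJob(
  fetchJob: () => Promise<Job>,
  maxAttempts = 20,
): Promise<Job> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const job = await fetchJob();
    if (job.status === "COMPLETED" || job.status === "FAILED") return job;
    await new Promise((resolve) => setTimeout(resolve, job.pollAfterMs));
  }
  throw new Error("job polling exceeded maxAttempts; treat like ESCALATE");
}
```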

Conseq should beat a frozen frontier-prompt baseline, not hide from it.

The pricing capability is now benchmarked against held-out cases with a frozen frontier-prompt baseline. The point is not to claim generic model superiority. The point is to show that Conseq makes better pricing decisions against realized outcomes.

Learning infrastructure stays off the critical path.

Customers should be able to understand Conseq as one predict call. Outcome connectors, pattern review, evaluation, and changelog details still exist, but they support model quality rather than define the primary integration story.

Simplicity is part of the product.

  • Developers integrate one prediction call instead of learning every internal subsystem.
  • Outcome recording, verification, and calibration can continue improving quality behind the same contract.
  • Supporting ops surfaces remain available without becoming the primary onboarding path.

Response shape

`v2` returns a flat decision envelope instead of leaking the internal prediction/job storage shape.

{
  "predictionId": "pred_123",
  "decision": "ESCALATE",
  "expectedValue": -18400,
  "tailRisk": 0.36,
  "confidence": 0.77,
  "immediateOutcome": "$8.7K of short-term revenue lift if the markdown increases unit demand.",
  "secondOrderEffects": [
    "Competitors are more likely to match the lower price and compress category margin."
  ],
  "thirdOrderEffects": [
    "Customers begin anchoring on the lower price and delay future full-price purchases."
  ],
  "recommendedAction": "Require human approval for markdowns above 15%...",
  "reasoning": "This pricing model treats percent markdown as a short-term unit lift tradeoff..."
}
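A minimal TypeScript typing of that flat envelope, with a guard to narrow an unknown JSON body before branching. The interface is inferred from the example above, not a published schema.

```typescript
interface PredictResponse {
  predictionId: string;
  decision: "ALLOW" | "WARN" | "ESCALATE";
  expectedValue: number;
  tailRisk: number;
  confidence: number;
  immediateOutcome: string;
  secondOrderEffects: string[];
  thirdOrderEffects: string[];
  recommendedAction: string;
  reasoning: string;
}

// Narrow an unknown response body before branching on `decision`.
function isPredictResponse(body: unknown): body is PredictResponse {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  return (
    typeof b.predictionId === "string" &&
    ["ALLOW", "WARN", "ESCALATE"].includes(b.decision as string)
  );
}
```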

This is a decision layer, not just a prompt wrapper.

  • The predict response includes expected value, tail risk, confidence, and a machine-readable `decision`.
  • The `predictionId` preserves what the system knew before execution.
  • Each `predictionId` can still be traced to customer, caller, API key, and outcome provenance internally.
  • Sync target: `1200ms` for normal pre-execution checks.
  • Recommended client timeout: `1500ms`.
  • Fallback policy: `ESCALATE` if Conseq is slow or unavailable.
  • The canonical public call is `predict`, even while older backend names remain in `v1` and GraphQL paths.
  • Confidence now blends feature coverage with recorded historical accuracy.
  • When `companyId` is present, same-company recorded outcomes can shift the baseline.
  • Repeat customers now get slice-specific calibration from similar recorded outcomes.
  • Recurring failure patterns can now be clustered and fed back into future predictions.
  • It explicitly returns second-order and third-order effects.
  • Each prediction can later be checked against actual outcomes.
  • Stripe checkout events can now feed outcome signals automatically.
  • The accuracy status is part of the product, not an afterthought.

Trace each prediction from request to verification.

  • Log `predictionId`, `decision`, caller, target, and submitted timestamp for every request.
  • Sync predict responses now expose `x-conseq-prediction-id`, `x-conseq-decision`, and `x-conseq-verification-status`.
  • Async `v1` responses expose `x-conseq-job-id` so polling can be traced back to the original request.
  • `/api/predictions` is the customer-facing trace view for later verification status and history.
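Pulling those headers into a trace log might look like this. The header names come from the list above; the record shape is illustrative, and the `Map` stands in for a fetch `Headers` object.

```typescript
interface TraceRecord {
  predictionId: string | null;
  decision: string | null;
  verificationStatus: string | null;
  loggedAt: string;
}

// Extract the x-conseq-* trace headers from a sync predict response.
function traceFromHeaders(headers: Map<string, string>): TraceRecord {
  return {
    predictionId: headers.get("x-conseq-prediction-id") ?? null,
    decision: headers.get("x-conseq-decision") ?? null,
    verificationStatus: headers.get("x-conseq-verification-status") ?? null,
    loggedAt: new Date().toISOString(),
  };
}
```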

How contract changes ship

The single-predict HTTP transport now leads with `v2`. Additive changes can ship within `v2`. Breaking changes require a new major version path and schema contract.

  • `POST /api/v2/predict` is now the canonical HTTP entry point.
  • `/api/v1` remains the stable pricing-specific contract during the transition.
  • `serviceStatus` still reflects the current GraphQL/AppSync contract version.
  • Breaking changes ship only behind a new major version.

Stable machine-readable error codes

  • `INVALID_INPUT`
  • `UNSUPPORTED_CAPABILITY`
  • `INSUFFICIENT_CREDITS`
  • `RATE_LIMIT_EXCEEDED`
  • `DAILY_QUOTA_EXCEEDED`
  • `IDEMPOTENCY_CONFLICT`
  • `NOT_FOUND`
  • `UNAUTHORIZED`
  • `INTERNAL_ERROR`

Predict returns one error envelope

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded for this customer account. Limit: 30 predictions per minute."
  }
}

Copy, paste, make the predict call.

export APP_URL="https://conseq.ai"
export CONSEQ_API_KEY="conseq_..."
curl -X POST "$APP_URL/api/v2/predict" \
  -H "content-type: application/json" \
  -H "authorization: Bearer $CONSEQ_API_KEY" \
  -d '{
    "capability": "PRICING",
    "actionType": "PERCENT_MARKDOWN",
    "companyId": "northstar-commerce",
    "actor": {
      "kind": "AI_AGENT",
      "id": "pricing-worker"
    },
    "target": {
      "type": "SKU",
      "id": "BUNDLE-15",
      "name": "Electronics Bundle"
    },
    "context": {
      "oldPrice": 100,
      "priceChangePercent": 0.15,
      "recentDailyRevenue": 4200,
      "recentDailyUnits": 42,
      "marginRate": 0.46
    }
  }'

Branch on the returned `decision` before executing anything:

if (prediction.decision !== "ALLOW") {
  return {
    execute: false,
    reason: prediction.decision,
  };
}

return {
  execute: true,
};

Record the realized outcome once it is known:

curl -X POST "$APP_URL/api/v1/outcomes" \
  -H "content-type: application/json" \
  -H "authorization: Bearer $CONSEQ_API_KEY" \
  -d '{
    "predictionId": "pred_123",
    "actualRevenueDelta": -1200,
    "actualUnitDelta": -12,
    "competitorMatchedWithin48h": true,
    "recommendationFollowed": false
  }'

The same flow through the JS/TS SDK:

import { createConseqClient } from "@conseq/sdk";

const client = createConseqClient({
  baseUrl: process.env.APP_URL!,
  apiKey: process.env.CONSEQ_API_KEY!,
  timeoutMs: 1500,
});

const prediction = await client.predict({
  capability: "PRICING",
  actionType: "PERCENT_MARKDOWN",
  actor: { kind: "AI_AGENT", id: "pricing-worker" },
  target: { type: "SKU", id: "BUNDLE-15" },
  context: {
    oldPrice: 100,
    priceChangePercent: 0.15,
    recentDailyRevenue: 4200,
  },
});

console.log(prediction.predictionId, prediction.decision);

Treat timeouts and transport failures as `ESCALATE`:

try {
  const prediction = await predict(input);

  if (prediction.decision !== "ALLOW") {
    return { execute: false, reason: prediction.decision };
  }

  return { execute: true };
} catch (error) {
  return {
    execute: false,
    reason: "ESCALATE",
    notes: "Conseq timed out or was unavailable. Keep the current price or escalate.",
  };
}

`ASYNC` remains on the current `v1` pricing contract while the `v2` request envelope is being standardized.

curl -X POST "$APP_URL/api/v1/predict" \
  -H "content-type: application/json" \
  -H "authorization: Bearer $CONSEQ_API_KEY" \
  -d '{
    "capability": "PRICING",
    "responseMode": "ASYNC",
    "actionType": "PERCENT_MARKDOWN",
    "companyId": "northstar-commerce",
    "actor": {
      "kind": "AI_AGENT",
      "id": "pricing-worker"
    },
    "target": {
      "type": "SKU",
      "id": "BUNDLE-15"
    },
    "context": {
      "oldPrice": 100,
      "priceChangePercent": 0.15,
      "recentDailyRevenue": 4200
    }
  }'

Then poll the job:

curl "$APP_URL/api/v1/jobs/job_123" \
  -H "authorization: Bearer $CONSEQ_API_KEY"

Start at the trust level you can actually ship.

Safest starting point. Call `predict`, log the result, and execute nothing.

const prediction = await predict(input);

console.log("shadowPrediction", prediction);
// Stop here. Do not execute anything yet.

Next step. Use Conseq to score the action, but route every proposal into human approval before execution.

const prediction = await predict(input);

return {
  execute: false,
  approvalRequired: true,
  reason: prediction.decision,
};

Most autonomous starting point. Auto-execute only when Conseq returns `ALLOW`.

const prediction = await predict(input);

if (prediction.decision !== "ALLOW") {
  return {
    execute: false,
    reason: prediction.decision,
  };
}

return executeAction();

Push real Stripe outcomes back into Conseq.

  • Create a Stripe connector in `/api/connectors`.
  • Use the generated webhook URL as a Stripe endpoint.
  • Subscribe the endpoint to `checkout.session.completed`.
  • Add `conseq_prediction_id` metadata to the checkout session.
  • Optionally add `conseq_units` when one session covers multiple units.

Checkout session metadata example:

metadata: {
  conseq_prediction_id: "pred_123",
  conseq_units: "2"
}

Conseq ignores unsupported event types, events without a prediction link, non-USD sessions, and duplicate Stripe event IDs.
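On the Stripe side, the link can be built as a small metadata object. The helper name is illustrative; the metadata keys come from the list above, and values are serialized to strings because Stripe metadata is string-valued.

```typescript
// Build the conseq_* metadata for a Stripe checkout session.
// conseq_units is only attached when one session covers multiple units.
function conseqCheckoutMetadata(
  predictionId: string,
  units?: number,
): Record<string, string> {
  const metadata: Record<string, string> = {
    conseq_prediction_id: predictionId,
  };
  if (units !== undefined && units > 1) {
    metadata.conseq_units = String(units);
  }
  return metadata;
}
```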