API docs
Conseq is the decision API for AI workers. Start with one public prediction call: `POST /api/v2/predict`. Branch on the returned `decision`, execute only when your policy allows it, and keep traceability, fallback behavior, and outcome verification behind the same contract.
Credit model: one pricing prediction currently spends one API credit. Outcome recording and prediction reads do not spend credits. The commercial model stays tied to prediction usage while Conseq is still a prediction-first product.
Conseq does not claim completed-action pricing yet. That model only becomes honest if the product later executes or directly governs those actions in production.
Canonical prediction transport: `v2`. Stable pricing contract: `v1`.
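The single `predict` call described above can be sketched as a small request builder. This is a minimal sketch, assuming only a fetch-capable runtime: the endpoint path and headers follow the quickstart below, while `buildPredictRequest` and the `PredictInput` shape are illustrative names, not part of the SDK.

```typescript
// Illustrative request builder for POST /api/v2/predict.
// Only the URL path, headers, and body fields come from the docs;
// the helper and type names are assumptions for this sketch.
interface PredictInput {
  capability: string;
  actionType: string;
  companyId?: string;
  actor: { kind: string; id: string };
  target: { type: string; id: string };
  context: Record<string, unknown>;
}

function buildPredictRequest(baseUrl: string, apiKey: string, input: PredictInput) {
  return {
    url: `${baseUrl}/api/v2/predict`,
    init: {
      method: "POST",
      headers: {
        "content-type": "application/json",
        authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(input),
    },
  };
}

// Usage: const { url, init } = buildPredictRequest(baseUrl, apiKey, input);
//        const res = await fetch(url, init);
```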
Why not just call a model?
A raw model call: useful for one-off reasoning, but it does not automatically give you a stable branch point, traceability, or measured follow-through.
A Conseq `predict` call: returns a machine-readable `decision`, stores a prediction record, and keeps fallback behavior and verification tied to the same workflow step.
The value is not one clever answer. It is stopping a bad policy from repeating across high-volume actions while keeping evidence of what the system knew at the time.
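The evidence claim above comes down to joining every executed action back to its prediction id. A minimal sketch of that audit linkage, with entirely illustrative names (`AuditRecord`, `recordAction` are not part of any Conseq SDK):

```typescript
// Hypothetical audit record: whatever store you use, persist the
// predictionId next to the action so the executed change can always be
// joined back to what the system knew at decision time.
interface AuditRecord {
  predictionId: string; // id returned by the predict call
  decision: string;     // the machine-readable branch point
  executed: boolean;    // whether the policy allowed execution
  at: string;           // ISO timestamp of the branch decision
}

function recordAction(predictionId: string, decision: string, executed: boolean): AuditRecord {
  return { predictionId, decision, executed, at: new Date().toISOString() };
}
```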
Latency
If Conseq is slow or unavailable, do not auto-approve the pricing action. Escalate it to human review or keep the current price in place.
These values are also exposed by `serviceStatus` and the REST response headers for `POST /api/v2/predict`.
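One way to enforce the fail-closed behavior above is a timeout wrapper around the predict call. This is a sketch, not SDK code: the `withTimeout` helper and the budget you pass it are illustrative choices.

```typescript
// Illustrative latency guard: race the predict promise against a timer.
// If Conseq does not answer within the budget, the caller should treat
// the rejection as "do not auto-approve" (escalate or keep current price).
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("conseq predict timed out")), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer); // avoid leaking the timer when predict wins the race
  }
}
```

Pair this with the fallback policy in quickstart step 6: map the rejection to `{ execute: false, reason: "ESCALATE" }` rather than letting the action proceed.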
Fallback playbook
Public mental model
Confidence and tail risk are first-class outputs. Do not let polished reasoning text override a low-confidence prediction or a `WARN` or `ESCALATE` decision.
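A sketch of treating confidence and tail risk as hard gates rather than advisory text. The thresholds here (0.6 confidence floor, 0.3 tail-risk ceiling) are illustrative policy choices for this example, not values defined by the API:

```typescript
type Decision = "ALLOW" | "WARN" | "ESCALATE";

// Illustrative gate: the prose reasoning never overrides the numbers.
function shouldAutoExecute(p: { decision: Decision; confidence: number; tailRisk: number }): boolean {
  if (p.decision !== "ALLOW") return false; // WARN / ESCALATE never auto-pass
  if (p.confidence < 0.6) return false;     // low confidence fails closed
  if (p.tailRisk > 0.3) return false;       // large tail risk fails closed
  return true;
}
```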
Variables
{
"capability": "PRICING",
"actionType": "PERCENT_MARKDOWN",
"companyId": "northstar-commerce",
"actor": {
"kind": "AI_AGENT",
"id": "pricing-worker",
"label": "Production pricing worker"
},
"target": {
"type": "SKU",
"id": "BUNDLE-15",
"name": "Electronics Bundle"
},
"context": {
"category": "electronics",
"oldPrice": 100,
"priceChangePercent": 15,
"promoDurationDays": 7,
"inventoryPressure": 0.7,
"seasonalityIndex": 1.15,
"priceElasticityHint": 0.68,
"recentCompetitorBehavior": "MATCHING_DISCOUNTS",
"promoAlreadyActive": false,
"recentDailyRevenue": 4200,
"recentDailyUnits": 42,
"marginRate": 0.46,
"competitorVolatility": 0.72,
"currentConversionRate": 0.034
}
}
GraphQL compatibility
The public product language is `predict`, but the AppSync contract still exposes `predictConsequence` while the HTTP `v2` surface is being generalized.
mutation PredictConsequence($input: ConsequencePredictionInput!) {
predictConsequence(input: $input) {
prediction {
id
capability
modelVersion
reasoningVersion
actionType
customerAccountId
customerAccountName
requestSource
requestCallerId
requestCallerEmail
requestApiKeyId
requestApiKeyLabel
expectedValue
expectedValueLabel
tailRisk
confidence
actionPolicy
immediateOutcome
secondOrderEffects
thirdOrderEffects
recommendation
reasoning
outcomeStatus
accuracyStatus
outcomeSource
outcomeActorId
outcomeActorLabel
}
job {
id
status
pollAfterMs
predictionId
}
error {
code
message
}
}
}
Async jobs
query GetConsequenceJob($id: ID!) {
getConsequenceJob(id: $id) {
id
status
pollAfterMs
predictionId
lastError
}
}
Mode choice
Benchmark policy
The pricing capability is now benchmarked against held-out cases with a frozen frontier-prompt baseline. The point is not to claim generic model superiority; it is to show that Conseq makes better pricing decisions against realized outcomes.
Behind The Scenes
Customers should be able to understand Conseq as one predict call. Outcome connectors, pattern review, evaluation, and changelog details still exist, but they support model quality rather than define the primary integration story.
Why This Matters
`v2` returns a flat decision envelope instead of leaking the internal prediction/job storage shape.
{
"predictionId": "pred_123",
"decision": "ESCALATE",
"expectedValue": -18400,
"tailRisk": 0.36,
"confidence": 0.77,
"immediateOutcome": "$8.7K of short-term revenue lift if the markdown increases unit demand.",
"secondOrderEffects": [
"Competitors are more likely to match the lower price and compress category margin."
],
"thirdOrderEffects": [
"Customers begin anchoring on the lower price and delay future full-price purchases."
],
"recommendedAction": "Require human approval for markdowns above 15%...",
"reasoning": "This pricing model treats percent markdown as a short-term unit lift tradeoff..."
}
Why the contract matters
Request observability
Versioning
The single-predict HTTP transport now leads with v2. Additive changes can ship within v2. Breaking changes require a new major version path and schema contract.
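Since breaking changes arrive as a new major version path, it can help to pin the version segment in one place so a future migration is a single-line change. A minimal sketch; the `endpoint` helper and constant names are illustrative:

```typescript
// Illustrative version pinning: the paths come from this doc,
// the helper itself is an assumption of this sketch.
const PREDICT_VERSION = "v2"; // canonical prediction transport
const PRICING_VERSION = "v1"; // stable pricing contract (outcomes, jobs)

function endpoint(baseUrl: string, version: string, path: string): string {
  return `${baseUrl}/api/${version}/${path}`;
}
```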
Error contract
HTTP error response
{
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Rate limit exceeded for this customer account. Limit: 30 predictions per minute."
}
}
Quickstarts
1. API key auth
export APP_URL="https://conseq.ai"
export CONSEQ_API_KEY="conseq_..."
2. Run a prediction
curl -X POST "$APP_URL/api/v2/predict" \
-H "content-type: application/json" \
-H "authorization: Bearer $CONSEQ_API_KEY" \
-d '{
"capability": "PRICING",
"actionType": "PERCENT_MARKDOWN",
"companyId": "northstar-commerce",
"actor": {
"kind": "AI_AGENT",
"id": "pricing-worker"
},
"target": {
"type": "SKU",
"id": "BUNDLE-15",
"name": "Electronics Bundle"
},
"context": {
"oldPrice": 100,
"priceChangePercent": 0.15,
"recentDailyRevenue": 4200,
"recentDailyUnits": 42,
"marginRate": 0.46
}
}'
3. Branch on `decision`
if (prediction.decision !== "ALLOW") {
return {
execute: false,
reason: prediction.decision,
};
}
return {
execute: true,
};
4. Record the outcome
curl -X POST "$APP_URL/api/v1/outcomes" \
-H "content-type: application/json" \
-H "authorization: Bearer $CONSEQ_API_KEY" \
-d '{
"predictionId": "pred_123",
"actualRevenueDelta": -1200,
"actualUnitDelta": -12,
"competitorMatchedWithin48h": true,
"recommendationFollowed": false
}'
5. JS/TS SDK
import { createConseqClient } from "@conseq/sdk";
const client = createConseqClient({
baseUrl: process.env.APP_URL!,
apiKey: process.env.CONSEQ_API_KEY!,
timeoutMs: 1500,
});
const prediction = await client.predict({
capability: "PRICING",
actionType: "PERCENT_MARKDOWN",
actor: { kind: "AI_AGENT", id: "pricing-worker" },
target: { type: "SKU", id: "BUNDLE-15" },
context: {
oldPrice: 100,
priceChangePercent: 0.15,
recentDailyRevenue: 4200,
},
});
console.log(prediction.id, prediction.actionPolicy);
6. Fallback policy
try {
const prediction = await predict(input);
if (prediction.decision !== "ALLOW") {
return { execute: false, reason: prediction.decision };
}
return { execute: true };
} catch (error) {
return {
execute: false,
reason: "ESCALATE",
notes: "Conseq timed out or was unavailable. Keep the current price or escalate.",
};
}
7. Queue async job
`ASYNC` remains on the current `v1` pricing contract while the `v2` request envelope is being standardized.
curl -X POST "$APP_URL/api/v1/predict" \
-H "content-type: application/json" \
-H "authorization: Bearer $CONSEQ_API_KEY" \
-d '{
"capability": "PRICING",
"responseMode": "ASYNC",
"actionType": "PERCENT_MARKDOWN",
"companyId": "northstar-commerce",
"actor": {
"kind": "AI_AGENT",
"id": "pricing-worker"
},
"target": {
"type": "SKU",
"id": "BUNDLE-15"
},
"context": {
"oldPrice": 100,
"priceChangePercent": 0.15,
"recentDailyRevenue": 4200
}
}'
8. Poll async job
curl "$APP_URL/api/v1/jobs/job_123" \
-H "authorization: Bearer $CONSEQ_API_KEY"Integration maturity
Shadow mode
Safest starting point. Call `predict`, log the result, and execute nothing.
const prediction = await predict(input);
console.log("shadowPrediction", prediction);
// Stop here. Do not execute anything yet.
Approval-required
Next step. Use Conseq to score the action, but route every proposal into human approval before execution.
const prediction = await predict(input);
return {
execute: false,
approvalRequired: true,
reason: prediction.decision,
};
Low-risk autopass
Most autonomous starting point. Auto-execute only when Conseq returns `ALLOW`.
const prediction = await predict(input);
if (prediction.decision !== "ALLOW") {
return {
execute: false,
reason: prediction.decision,
};
}
return executeAction();
Webhook flow
Webhook metadata
metadata: {
conseq_prediction_id: "pred_123",
conseq_units: "2"
}
Conseq ignores unsupported event types, events without a prediction link, non-USD sessions, and duplicate Stripe event IDs.
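The metadata keys above are what Conseq matches on the webhook side. A minimal helper for building them on the integrator side might look like this; the helper name is illustrative, and the values are strings because Stripe metadata is string-valued:

```typescript
// Illustrative builder for the Conseq linkage metadata. Pass the result
// as the `metadata` field when creating the Stripe object (for example a
// Checkout Session) so the webhook event can be joined to a prediction.
function conseqMetadata(predictionId: string, units: number): Record<string, string> {
  return {
    conseq_prediction_id: predictionId,
    conseq_units: String(units), // Stripe metadata values must be strings
  };
}
```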