APIGQL is a federation and policy platform first. The AI lives inside that fabric to speed up the hard work: it reads your OpenAPI files, proposes joins and SDL patches, and can turn natural language into GraphQL queries — all visible, reviewable, and overridable in the UI. You can turn AI off and still keep REST→GraphQL, federation, PDP, DSAR, and OTEL; every AI call runs downstream of PDP, so prompts and outputs never bypass policy.
AI helpers in APIGQL are just another consumer of the governed graph. Before an AI resolver runs,
the data plane calls your PDP with the same subject, resource, and context used for normal queries.
If the PDP obligations say features.ai = false, the AI resolver simply does not execute.
When the call is allowed, the resolver receives the decision's obligations, including features.ai and a masked selection set. Tenants can have AI fully enabled, limited to certain graphs, or completely disabled — the underlying REST→GraphQL and federation continue to work either way.
```shell
# PDP evaluates whether AI is allowed for this query
curl -sS -X POST https://<cp-host>/pdp/decision.v2 \
  -H "content-type: application/json" \
  -d '{
    "tenant": "t_demo",
    "workspace": "ws_primary",
    "action": "read",
    "resource": { "type": "GraphQuery", "name": "orders" },
    "context": {
      "role": "analyst",
      "selection": ["orders.id", "orders.total", "orders.userEmail"],
      "client": "console",
      "useAI": true
    }
  }' | jq
```
APIGQL uses allowFields, mask, and obligations.features.ai
to decide which fields exist in the AI plan and whether AI runs at all.
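To make the gating concrete, here is a minimal sketch of how the data plane consumes such a decision. The payload shape is hypothetical — the field names (`allow`, `allowFields`, `mask`, `obligations.features.ai`) follow the terms used above, but your deployment's exact response schema may differ:

```shell
# Hypothetical PDP decision payload (field names are illustrative).
decision='{
  "allow": true,
  "allowFields": ["orders.id", "orders.total"],
  "mask": ["orders.userEmail"],
  "obligations": { "features": { "ai": true } }
}'

# Gate the AI resolver exactly as described above: if the obligation
# says features.ai is not true, the AI resolver never executes.
ai_enabled=$(echo "$decision" | jq -r '.obligations.features.ai')
if [ "$ai_enabled" != "true" ]; then
  echo "AI disabled by policy — skipping AI resolver"
fi
```

The point of the sketch: the AI path has no decision logic of its own; it only reads the same decision object every other resolver reads.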
```shell
# 1) User types: "Show risky orders from last week."
# 2) Modeling helper proposes a GraphQL query plan.
# 3) Data plane calls PDP. Only if features.ai = true:
#    - execute the GraphQL selection
#    - run the AI summarizer on the masked result.
curl -sS https://<dp-host>/graphql \
  -H "content-type: application/json" \
  -H "x-tenant-id: t_demo" \
  -H "x-workspace-id: ws_primary" \
  -H "authorization: Bearer <jwt>" \
  -d '{
    "query": "query AskAIOverOrders($prompt: String!) { askAI { riskyOrdersSummary(prompt: $prompt) { text traceId usedFields } } }",
    "variables": {
      "prompt": "Summarize the riskiest orders from last week."
    }
  }' | jq
```
If PDP returns features.ai = false, askAI short-circuits with a
policy error and emits a span so you can see who asked and why it was denied.
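A denial sketch helps show what "short-circuits with a policy error" looks like on the wire. This payload is hypothetical — the error code name and extensions layout are illustrative, not APIGQL's documented schema — but a GraphQL-style errors array with a traceId is the pattern described above:

```shell
# Hypothetical denied response (error code and extensions are illustrative):
denied='{
  "errors": [
    {
      "message": "askAI denied by policy: features.ai = false",
      "extensions": { "code": "POLICY_DENIED", "traceId": "4f2b9c1d7a3e4f12" }
    }
  ],
  "data": null
}'

# The traceId in extensions is what lets you find the denial span:
echo "$denied" | jq -r '.errors[0].extensions.traceId'
```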
Modeling helpers do three things: they read your OpenAPI files to infer types and relationships, propose joins and SDL patches for the federated graph, and turn natural language into GraphQL queries.
In every case, they operate on the same shape and the same decisions as your regular GraphQL clients. No secret data paths, no “AI shadow API,” and no dependency on AI for the core federation and policy value.
AI outputs carry a traceId that ties them back to the PDP decision that allowed the call, the exact GraphQL selection that was executed, and the spans emitted for the request.
That means “what data fed this AI answer?” goes from hand-wavy to one search in your tracing backend.
```shell
# Find the AI span by traceId
# (traceId is returned in askAI.riskyOrdersSummary.traceId)
# Example Jaeger / OpenTelemetry search
traceId="4f2b9c1d7a3e4f12"
open "https://<jaeger-host>/search?traceID=$traceId"
```

Because AI is in-path, not a sidecar, DSAR and policy logs already include the context for what each AI call saw and why it was allowed.
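As a sketch of what that one search turns up, here is a hypothetical AI span. The attribute keys (`pdp.decision`, `graphql.selection`, `ai.used_fields`) are illustrative names, not APIGQL's documented instrumentation, but they show the kind of context an in-path design can attach:

```shell
# Hypothetical OTEL span for an allowed AI call (attribute names illustrative):
span='{
  "name": "askAI.riskyOrdersSummary",
  "traceId": "4f2b9c1d7a3e4f12",
  "attributes": {
    "pdp.decision": "allow",
    "pdp.obligations.features.ai": true,
    "graphql.selection": "orders.id,orders.total",
    "ai.used_fields": "orders.id,orders.total"
  }
}'

# "What data fed this AI answer?" becomes a one-line lookup:
echo "$span" | jq -r '.attributes["ai.used_fields"]'
```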