APIGQL
Your REST · Our Graph
FAQ • What teams usually ask

Frequently asked questions

A quick cut of the questions we hear from security, platform, and app teams when they first look at APIGQL. Think of this as the “pre-RFI” sheet: enough detail to see if we fit your architecture, without an eighty-page PDF.

Architecture & deployment

Is APIGQL a gateway, a graph server, or a control plane?

All three, on purpose. The Control Plane ingests your OpenAPI specs and manages tenants, workspaces, and policies. The Data Plane is the GraphQL runtime that calls your REST services and enforces PDP decisions. Deployed together, they behave like a governed gateway with federation-ready SDL output.

Can we run CP and DP separately?

Yes. CP and DP are built to run as separate services. Many teams host CP in a more controlled “platform” VPC and deploy DPs closer to apps or regions, while still keeping a single source of truth for SDL, policy, and DSAR.

Do you support Apollo Federation?

Yes. APIGQL can emit Apollo-compatible subgraphs from your OpenAPI specs. You can start with a single, unified graph, then split into subgraphs later without re-implementing your joins or policies.
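
For illustration only, a generated orders subgraph could look roughly like the sketch below; the type names, fields, and @key choice are placeholders, not APIGQL's actual output.

    import gql from "graphql-tag";

    // Shape of a federation-ready subgraph schema; entities carry @key so an
    // Apollo router/gateway can resolve references across subgraphs.
    export const ordersSubgraph = gql`
      type Order @key(fields: "id") {
        id: ID!
        status: String
        customer: Customer
      }

      type Customer @key(fields: "id") {
        id: ID!
        email: String
      }
    `;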

How does this coexist with our existing API gateways?

APIGQL usually sits behind your edge gateway. The edge handles TLS termination and coarse routing; APIGQL handles fine-grained joins, PDP/PEP, DSAR, and telemetry on the inside.

Legacy & “messy” APIs

Our REST/JSON is inconsistent. Does that leak into the graph?

No. APIGQL uses an explicit mapping layer between your REST/OpenAPI and the GraphQL schema. That layer normalizes field names (for example, USER_ID, userId, and user_id can all become userId in the graph), reshapes nested payloads, and lets you hide the “warts” of legacy responses from client developers.
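
As a sketch of what that looks like (the config shape here is hypothetical, not APIGQL's actual mapping format), a single graph field can absorb several upstream spellings:

    // Hypothetical mapping sketch: inconsistent upstream spellings all surface
    // as one stable `userId` field in the graph; clients never see the variants.
    const userMapping = {
      graphType: "User",
      fields: {
        userId: { from: ["USER_ID", "userId", "user_id"] },
        email: { from: ["EMAIL_ADDR", "email"] },
      },
    };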

How do you handle different pagination styles?

The mapping layer captures how each upstream paginates—offset/limit, page/size, cursors, or “homegrown” approaches—and projects a consistent pattern into GraphQL. Clients see a single, predictable connection style; APIGQL does the work of translating arguments to whatever the backend expects.
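
A rough sketch of the idea (the backend argument names are placeholders): clients always write connection-style arguments, and the mapping translates them per upstream.

    // Hypothetical pagination mapping: the client-facing side is always
    // connection-style; the backend side declares whatever the upstream expects.
    const ordersPagination = {
      client: { style: "connection", args: ["first", "after"] },
      backend: { style: "offset", argMap: { first: "limit", after: "offset" } },
    };

    // The client query stays the same regardless of how the backend paginates:
    //   { orders(first: 20, after: "opaque-cursor") { edges { node { id } } } }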

Do we have to clean or rewrite our legacy APIs first?

No. The goal is to modernize at the graph layer without forcing immediate backend rewrites. You can register existing OpenAPI specs (or wrappers around SOAP/older services), define mappings, and give frontend and partner teams a modern graph while backend cleanup happens on its own timeline.

Can you sit in front of SOAP or older XML services?

Yes, via a thin adapter. Many teams front SOAP/XML endpoints with a small REST facade that exposes an OpenAPI spec. APIGQL then treats that spec like any other source, applies mappings and policies, and hides the SOAP details from consumers.
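
A minimal facade can be very small. The sketch below wraps one SOAP operation behind a JSON endpoint (the SOAP URL, action name, and field extraction are placeholders); once that facade publishes an OpenAPI spec, APIGQL treats it like any other REST source.

    import express from "express";

    const app = express();

    app.get("/customers/:id", async (req, res) => {
      // Build a minimal SOAP envelope for the legacy operation.
      const envelope = `<?xml version="1.0"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body><GetCustomer><Id>${req.params.id}</Id></GetCustomer></soap:Body>
        </soap:Envelope>`;

      const soapRes = await fetch("https://legacy.example.com/soap", {
        method: "POST",
        headers: { "Content-Type": "text/xml", SOAPAction: "GetCustomer" },
        body: envelope,
      });
      const xml = await soapRes.text();

      // Naive extraction for the sketch; a real facade would use an XML parser.
      const name = /<Name>(.*?)<\/Name>/.exec(xml)?.[1] ?? null;
      res.json({ id: req.params.id, name });
    });

    app.listen(8080);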

Security, policy, and DSAR

Do we have to rewrite our auth?

No. APIGQL plugs into your existing IdPs (OIDC/SAML) and reads claims/roles/scopes from the tokens you already issue. You express ABAC rules in the PDP; the DP enforces them at field level.
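
As a purely illustrative sketch (the rule syntax is hypothetical, not APIGQL's policy language), a field-level rule keyed off the claims your IdP already issues might look like:

    // Hypothetical ABAC rule: expose Order.customerEmail only to callers
    // whose existing token carries the right role or scope.
    const rule = {
      resource: "Order.customerEmail",
      effect: "allow",
      when: {
        anyOf: [
          { claim: "roles", contains: "support" },
          { claim: "scope", contains: "orders:read" },
        ],
      },
    };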

How do you handle PII and masking?

Policies can mark fields as mask, drop, or allow. At runtime, the DP applies these decisions per resolver and stamps the outcome into OTEL spans (apigql.pdp.allow, apigql.pdp.mask, etc.).
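
In rough terms, a per-resolver decision is applied to the field value and stamped onto the active span. The sketch below is an assumption about shape rather than the DP's actual code; the span attribute names match the ones above.

    import { trace } from "@opentelemetry/api";

    type Decision = "allow" | "mask" | "drop";

    // Apply a PDP decision to one field value and record the outcome on the span.
    function applyDecision(decision: Decision, value: unknown): unknown {
      const span = trace.getActiveSpan();
      span?.setAttribute("apigql.pdp.allow", decision === "allow");
      span?.setAttribute("apigql.pdp.mask", decision === "mask");

      if (decision === "allow") return value;
      if (decision === "mask") return "***";
      return null; // drop: the field is omitted from the response
    }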

What about DSAR (export/delete) requests?

DSAR flows live in the Control Plane: create, track, fulfill. Each request has an immutable audit trail and, for exports, a downloadable artifact (for example, export.zip) you can return to the subject.
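
The lifecycle is conventional: create a request, track it, fulfill it. The sketch below is illustrative only (endpoint path, payload, and response fields are assumptions, not the Control Plane's documented API).

    // Hypothetical DSAR export request against the Control Plane.
    const token = process.env.CP_TOKEN;

    const res = await fetch("https://cp.example.internal/dsar/requests", {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
      body: JSON.stringify({ subjectId: "user-123", kind: "export" }),
    });

    const { requestId, status } = await res.json();
    console.log(requestId, status);
    // Track the request until it is fulfilled, then download export.zip
    // and return it to the subject alongside the audit trail.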

Can we prove which policy was in effect at a given time?

Yes. Every DP call includes planId, sdlEtag, and policyEtag in traces, so you can reconstruct exactly which SDL and policy version produced a response.

AI helpers & observability

Where does AI run in the request path?

Always downstream of PDP/PEP. APIGQL first enforces policy (including masking and DSAR obligations), then hands the sanitized response to AI helpers for summarization, clustering, or explanation.
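
The ordering is the important part. As a sketch (the function names are illustrative, not APIGQL's API), the policy step always runs first and the AI helper only ever sees the sanitized result:

    type Ctx = { claims: Record<string, unknown> };

    // Stubs for illustration; the real enforcement and helpers live inside the DP.
    declare function enforcePolicy(raw: unknown, ctx: Ctx): Promise<unknown>;
    declare function summarize(sanitized: unknown): Promise<string>;

    async function respond(raw: unknown, ctx: Ctx) {
      const sanitized = await enforcePolicy(raw, ctx); // PDP/PEP first: mask, drop, DSAR obligations
      const summary = await summarize(sanitized);      // AI helper sees only what a client would see
      return { data: sanitized, summary };
    }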

Can AI see raw PII?

No, unless your policies explicitly allow it. AI helpers see the same shaped data your clients see, and we stamp which redactions were applied into the trace for audit.

What telemetry do you emit?

APIGQL emits OpenTelemetry traces with spans for GraphQL operations, REST calls, PDP decisions, and DSAR actions. You can search Jaeger by tags like apigql.pdp.allow or apigql.dsar.action.

Where do we send logs and metrics?

You configure OTLP/OTEL exporters to point at your preferred backends (Jaeger, Tempo, Honeycomb, etc.). APIGQL doesn’t dictate your observability stack; it just makes sure the right signals are emitted.
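
Assuming the DP honors the standard OpenTelemetry environment variables (check your deployment docs), pointing it at a Jaeger OTLP endpoint is a matter of a few settings; the sketch below shows them as a container env object.

    // Standard OTEL exporter settings, expressed as container environment variables.
    // The service name and endpoint are examples; substitute your own backend.
    const dpEnv = {
      OTEL_SERVICE_NAME: "apigql-dp",
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://jaeger:4318",
      OTEL_EXPORTER_OTLP_PROTOCOL: "http/protobuf",
    };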

Commercial & rollout

Can we start with a single team or use case?

Yes. Many customers begin with a single regulated use case (for example, “orders with PII”) and then grow into more tenants and workspaces as they get comfortable with PDP and DSAR flows.

How invasive is the rollout?

Backends don’t need to change. You register their OpenAPI specs, wire auth and PDP, and route a slice of traffic through APIGQL. From there you can gradually expand coverage while your existing gateway and services stay put.

Do you support hybrid and on-prem?

Yes. CP/DP can be deployed into your cloud accounts, on-prem clusters, or a mix (for example, CP in cloud, DPs close to on-prem systems) so data stays where it belongs.

What if we want to leave APIGQL later?

The graph schema, mappings, and policies are all explicit artifacts (SDL and config), not hidden inside a black box. You can export the generated SDL and mapping configuration and use them as inputs to a different graph stack if you decide to move on. APIGQL aims to be a fast path to a governed graph, not a trap.

What’s the best way to evaluate APIGQL?

Point us at two or three heterogeneous REST services (different auth, different data), and we’ll federate them into a single graph with PDP, DSAR, and OTEL wiring. That usually reveals the value in a few hours.