Extending with a custom AI provider
This is a developer reference for adding a new AI provider adapter (text model, image model, or both) to Structura’s AI engine. If you’re just trying to pick a provider in wp-admin, see AI settings.
Structura’s AI engine uses an adapter pattern: the pipeline (research → draft → image) talks to a small set of provider-agnostic interfaces, and each concrete provider — OpenAI, Gemini, Anthropic, Cloud — implements them. Adding a new provider means writing an adapter that satisfies those interfaces and registering it with the engine.
Source of truth: functions/src/ai/engine.ts and specs/ai-provider-refactor-plan.md.
What you’ll write
For a text provider:
- A class implementing TextProviderAdapter, with:
  - id, displayName, supportedModels
  - generate(request): Promise<GenerationResult> — the core call.
  - estimateCost(request): Promise<CostEstimate> — used for Cloud quota tracking and the post-generation cost line on the Logs page.
  - validateKey(key) — used by Structura → Settings → AI Engine to confirm a pasted key is valid before saving.
For an image provider:
- A class implementing ImageProviderAdapter, with:
  - id, displayName, supportedSizes
  - generateImage(request): Promise<ImageResult>
  - validateKey(key)
Most adapters end up being 150–300 lines each. Look at the existing OpenAIAdapter, GeminiAdapter, and AnthropicAdapter for the shape.
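The skeleton of a text adapter can be sketched as follows. The interface and method names (TextProviderAdapter, generate, estimateCost, validateKey) come from this doc; the "Acme" provider, the request/result field shapes, and the pricing heuristic are illustrative assumptions, not the real types in the codebase:

```typescript
// Sketch of a hypothetical "Acme" text provider adapter.
interface GenerationRequest {
  prompt: string;
  model: string;
  temperature?: number;
  maxTokens?: number;
}

interface GenerationResult {
  content: string;
  usage: { inputTokens: number; outputTokens: number };
  costUsd: number;
}

interface CostEstimate {
  estimatedUsd: number;
}

class AcmeAdapter {
  readonly id = "acme";
  readonly displayName = "Acme AI";
  readonly supportedModels = ["acme-small", "acme-large"];

  constructor(private apiKey: string) {}

  async generate(_request: GenerationRequest): Promise<GenerationResult> {
    // A real adapter would POST to the provider's completion endpoint
    // with this.apiKey and map the response into a GenerationResult.
    throw new Error("not implemented in this sketch");
  }

  async estimateCost(request: GenerationRequest): Promise<CostEstimate> {
    // Made-up flat rate: $2 per million tokens, prompt plus completion.
    const promptTokens = Math.ceil(request.prompt.length / 4); // rough heuristic
    const totalTokens = promptTokens + (request.maxTokens ?? 1024);
    return { estimatedUsd: (totalTokens / 1_000_000) * 2 };
  }

  async validateKey(key: string): Promise<boolean> {
    // A real adapter would hit a cheap authenticated endpoint and check
    // for a 200; this placeholder only checks the key format.
    return key.startsWith("acme-");
  }
}
```

The real generate implementation is where most of the 150–300 lines go: request shaping, streaming or polling, and response mapping.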
Registering the adapter
- Add the adapter to the functions/src/ai/providers/index.ts export list.
- Add it to the engine’s provider registry so it’s visible to the pipeline.
- Add a corresponding entry in the shared types package (packages/types) so the client’s AI Engine settings page can render the right form fields.
- Surface any provider-specific config (e.g., a base URL override for self-hosted models) in the settings form.
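A rough sketch of what the registry wiring amounts to; the actual export list and registry shape in functions/src/ai/providers/index.ts and the engine may differ:

```typescript
// Hypothetical provider registry: a map keyed by provider id that the
// pipeline uses to look adapters up at run time.
interface ProviderEntry {
  id: string;
  displayName: string;
}

const providerRegistry = new Map<string, ProviderEntry>();

function registerProvider(entry: ProviderEntry): void {
  // Guard against two adapters claiming the same id.
  if (providerRegistry.has(entry.id)) {
    throw new Error(`duplicate provider id: ${entry.id}`);
  }
  providerRegistry.set(entry.id, entry);
}

registerProvider({ id: "acme", displayName: "Acme AI" });
```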
Testing
The adapter pattern makes adapters eminently testable:
- Unit-test your adapter against recorded fixtures — no real API calls in CI.
- Add integration tests against a dev key if possible; gate them behind an environment variable so CI runs without keys can skip them.
See functions/src/ai/__tests__/ for the existing test patterns.
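One way to gate an integration test, sketched with Node’s built-in node:test runner; the env variable name and the inline validateKey stub are hypothetical, and the existing __tests__ directory may use a different framework:

```typescript
import { test } from "node:test";

// Hypothetical env variable; pick one per provider. When it is unset
// (e.g. in CI without keys), the test is reported as skipped.
const liveKey = process.env.ACME_API_KEY;

test("validateKey accepts a live key", { skip: !liveKey }, async () => {
  // Stand-in for your real adapter's validateKey call.
  const validateKey = async (key: string) => key.length > 0;
  if (!(await validateKey(liveKey!))) {
    throw new Error("expected live key to validate");
  }
});
```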
What the engine hands to your adapter
A GenerationRequest looks like:
- prompt — the assembled system + user prompts for this stage of the pipeline.
- model — the caller’s preferred model (your adapter can veto).
- temperature, maxTokens, etc.
- context — structured metadata (campaign ID, stage name) for your logs, never fed to the model.
Your adapter returns a GenerationResult with:
- content — the model’s output.
- usage — tokens (or equivalent) consumed.
- costUsd — your calculated cost, so the engine can log and charge accurately.
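As an illustration of how usage turns into costUsd, here is a hypothetical pricing helper; the field names and the per-million-token rates are made up, and real rates belong in your adapter’s config:

```typescript
// Convert token usage into a dollar cost using per-million-token rates.
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

interface Rates {
  inputPerMTokUsd: number;
  outputPerMTokUsd: number;
}

function computeCostUsd(usage: Usage, rates: Rates): number {
  const inputCost = (usage.inputTokens / 1_000_000) * rates.inputPerMTokUsd;
  const outputCost = (usage.outputTokens / 1_000_000) * rates.outputPerMTokUsd;
  return inputCost + outputCost;
}

// e.g. 1M input + 0.5M output tokens at $3 / $15 per MTok:
// computeCostUsd({ inputTokens: 1_000_000, outputTokens: 500_000 },
//                { inputPerMTokUsd: 3, outputPerMTokUsd: 15 }) → 10.5
```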
Error handling
Adapters throw typed errors:
- AIAuthError — bad key, revoked credential.
- AIRateLimitError — transient throttle.
- AIQuotaError — out of quota / billing issue.
- AIProviderError — catch-all for other 4xx/5xx.
The engine uses the type to decide whether to retry, fail the run, or surface a specific message on the Logs page.
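In practice this usually means mapping HTTP status codes onto the typed errors at the adapter boundary. The class names come from this doc; modelling the specific errors as subclasses of the catch-all AIProviderError, and the exact status mapping, are assumptions for illustration:

```typescript
// Typed errors the engine inspects to pick a retry/fail strategy.
class AIProviderError extends Error {}
class AIAuthError extends AIProviderError {}
class AIRateLimitError extends AIProviderError {}
class AIQuotaError extends AIProviderError {}

// Hypothetical status-to-error mapping; some providers signal quota
// exhaustion via 429 plus an error code rather than 402.
function toTypedError(status: number, message: string): AIProviderError {
  if (status === 401 || status === 403) return new AIAuthError(message);
  if (status === 429) return new AIRateLimitError(message);
  if (status === 402) return new AIQuotaError(message);
  return new AIProviderError(message);
}
```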
Ship it
Once your adapter is in and tested:
- Cut a feat: commit. The changelog will name the provider automatically.
- Update AI settings with the new provider’s setup steps (you’re writing for end users at that point, not developers).