Available starting with FlowX.AI 5.7.0.

AI provider configuration allows organizations to manage their own LLM providers and model assignments.

Overview

FlowX supports configuring AI providers at three levels:
  • Organization level: Add and manage AI providers, set API keys, whitelist specific models
  • Organization level (per workspace type): Assign default and fallback models for each AI capability, scoped by workspace type (Sandbox, Staging, Production)
  • Node level: Override the default model on individual workflow nodes
Organization admins define which providers and models are available, and configure model assignments per workspace type. Workflow designers can optionally override the model on individual AI nodes.

Organization-level: Model providers

Access model providers from Organization Settings → AI Settings → Model Providers.
Model Providers page in Organization Settings
An OpenAI provider card is present by default, using a FlowX-managed key. You can switch it to your own key or add additional providers.

Adding a provider

1. Navigate to Model Providers

In FlowX Designer, go to your organization settings and select Model Providers.
2. Add a provider

Click the + button to open the Provider Details dialog. Configure:
Field | Description
Provider | Select from the provider catalog (e.g., OpenAI)
Base URL | API endpoint (required). OpenAI offers presets (Global, EU, US), or select Custom URL to enter any OpenAI-compatible endpoint.
API Key Source | Select Own to provide your own API key, or FlowX to use the FlowX-provided API key
Key | Your provider API key (required; shown when Own is selected)
Organization ID | Provider organization ID (optional)
You cannot add the same provider type twice. The system validates against the provider catalog and shows an error if a duplicate is attempted.
3. Test the connection

Click Test Connection to validate your credentials against the provider API.
  • Success: “Connection successful. X models discovered.”
  • Failure: Error details are shown (e.g., invalid API key for 401, insufficient permissions for 403).
You cannot save the provider configuration until the connection test succeeds.
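As a rough illustration of the behavior above, the connection test translates the provider API's response into the messages shown in the UI. FlowX's actual backend logic is not public, so the function below is a hypothetical sketch, not the real implementation:

```python
# Hypothetical mapping from a provider API response to the user-facing
# connection-test messages described above (names are illustrative).

def connection_test_message(status_code: int, model_count: int = 0) -> str:
    """Translate a provider API response into a connection-test message."""
    if status_code == 200:
        return f"Connection successful. {model_count} models discovered."
    if status_code == 401:
        return "Connection failed: invalid API key."
    if status_code == 403:
        return "Connection failed: insufficient permissions."
    return f"Connection failed: unexpected response (HTTP {status_code})."
```

Only a 200 response (a successful connection) enables the Save button.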
4. Enable models

After saving, click Manage Models to open the model whitelisting modal. Enable specific models from the discovered list. Only enabled models appear in workspace-level assignments.

API Key Source

The API Key Source radio buttons control how the provider authenticates:
Option | Description
Own | Provide your own API key. You have full control over the key, Organization ID, and which models are enabled.
FlowX | Use the FlowX-provided API key. The Key and Organization ID fields are hidden and cannot be edited.
When editing the OpenAI provider, switch between Own and FlowX under API Key Source.

Model whitelisting

The model whitelisting modal (opened from Manage Models) lets you control which discovered models are available downstream. Models are grouped by capability:
Category | Description | Example models
Text Generation | Text responses, summaries, translations, extraction | gpt-4o, gpt-4o-mini, gpt-4-turbo
Code Generation | Automated code generation for Developer Agents | gpt-4o, gpt-4o-mini
Embeddings | Vector representations for knowledge base indexing and RAG | text-embedding-3-large, text-embedding-3-small
Speech to Text | Audio transcription for the Speech to Text workflow node | whisper-1
Text to Speech | Audio generation for the Speech to Text workflow node (TTS operation) | tts-1-hd
Each model row shows the model name, capability badges, context window (e.g., “128K tokens”), and indicative pricing per 1M tokens (e.g., “$5 / $15 per 1M tokens”).
  • Use Search to filter models by name
  • Use Select All / Deselect All for bulk actions
  • A counter at the bottom shows “X of Y models enabled”
  • Click Save Whitelist to persist your selection
Disabling a previously enabled model shows a warning listing where that model is currently used (workspace defaults, fallback assignments). If you proceed, affected configurations switch to their fallback model.
Model whitelisting is not available when using the FlowX-managed key for OpenAI.
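To make the whitelisting effect concrete, here is a minimal sketch, assuming a simple in-memory shape for discovered models (model names and data structures are illustrative, not FlowX's internal representation). Only enabled models surface in capability dropdowns, and the counter reflects the enabled subset:

```python
# Illustrative: discovered models mapped to their capability badges.
DISCOVERED = {
    "gpt-4o": {"Text Generation", "Code Generation"},
    "gpt-4o-mini": {"Text Generation", "Code Generation"},
    "text-embedding-3-small": {"Embeddings"},
    "whisper-1": {"Speech to Text"},
}

def models_for_capability(enabled: set[str], capability: str) -> list[str]:
    """Only enabled (whitelisted) models appear in downstream dropdowns."""
    return sorted(
        name for name, caps in DISCOVERED.items()
        if name in enabled and capability in caps
    )

def whitelist_counter(enabled: set[str]) -> str:
    """Render the 'X of Y models enabled' counter."""
    return f"{len(enabled & set(DISCOVERED))} of {len(DISCOVERED)} models enabled"
```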

Editing a provider

When editing an existing provider (changing the API key or Organization ID):
  1. The Save button is disabled after any field edit
  2. Click Test Connection to validate the new credentials
  3. On success, Save becomes enabled
  4. If the newly discovered model list differs from the previous one:
    • New models are added to the whitelist automatically
    • If any previously enabled models are no longer available, a replacement wizard guides you through selecting alternatives
    • Skipping the wizard removes the missing models from all dropdowns where they were used, and the fallback mechanism activates
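The model-list comparison that drives the auto-add and replacement-wizard behavior can be sketched as a pair of set differences (a hedged illustration; the actual FlowX logic is not public):

```python
# Illustrative: compare the model list discovered with the new credentials
# against the previously known list.

def diff_models(previous: set[str], discovered: set[str]) -> tuple[set[str], set[str]]:
    """Return (newly_available, no_longer_available).

    Newly available models are added to the whitelist automatically;
    models that disappeared trigger the replacement wizard.
    """
    return discovered - previous, previous - discovered
```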

Deleting a provider

All providers except the default OpenAI (FlowX-managed) can be deleted. Deleting a provider:
  1. Shows a confirmation with the impact: nodes and knowledge bases using this provider’s models switch to their fallback model
  2. If the fallback model is also from the deleted provider, a warning indicates that all LLM-related activities using those models will fail
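A sketch of the impact check described above, under an assumed assignment shape (capability mapped to (provider, model) pairs; provider and model names are hypothetical):

```python
# Illustrative workspace assignments: capability -> default/fallback models.
ASSIGNMENTS = {
    "Text Generation": {"default": ("openai", "gpt-4o"),
                        "fallback": ("acme", "acme-large")},
    "Embeddings": {"default": ("openai", "text-embedding-3-large"),
                   "fallback": ("openai", "text-embedding-3-small")},
}

def deletion_impact(provider: str) -> dict[str, str]:
    """Classify each affected capability: 'switches to fallback' if only
    the default is lost, 'will fail' if default and fallback are both
    from the deleted provider."""
    impact = {}
    for capability, models in ASSIGNMENTS.items():
        default_hit = models["default"][0] == provider
        fallback_hit = models["fallback"][0] == provider
        if default_hit and fallback_hit:
            impact[capability] = "will fail"
        elif default_hit:
            impact[capability] = "switches to fallback"
    return impact
```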

Provider card

Each provider card shows:
  • Provider name and icon
  • Masked API key (last 4 characters visible, e.g., ••••••••••••1234)
  • Model count: “Models enabled: X / Y”
  • Organization ID (if configured)
  • Connection status
  • Available actions: Manage Models, Edit, Delete
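The key masking on the card keeps only the last four characters visible. A minimal sketch of that display rule (the exact rendering is FlowX's; this is illustrative):

```python
# Illustrative: mask an API key for display, keeping the last 4 characters.

def mask_api_key(key: str, visible: int = 4) -> str:
    if len(key) <= visible:
        return "•" * len(key)
    return "•" * (len(key) - visible) + key[-visible:]
```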

Defaults & Fallbacks

Access model assignments from Organization Settings → AI Settings → Defaults & Fallbacks.
Defaults & Fallbacks configuration with workspace-type tabs
Organization admins assign a Default Model and an optional Fallback Model for each AI capability, scoped by workspace type. Use the Sandbox, Staging, and Production tabs to configure different model assignments per environment. Select Use provider default to inherit the catalog default for that capability.
Capability | Used for
Text Generation | Text operations (summarization, translation, extraction), data operations (enrichment, generation, transformation), custom agents, and intent classification
Image Understanding | AI image nodes (image description and image analysis). Only models with vision capability are shown.
Embeddings | Knowledge base indexing and context retrieval (RAG). Used by context retrieval nodes in workflows.
Document / OCR | AI document nodes: document generation, extraction, understanding, and text extraction from documents.
Developer Agents | AI planner, developer, analyst, and designer agents. Used for automated code generation and analysis workflows. For best performance, this capability uses the FlowX tested and optimized model configured at the organization level.
Default models from the OpenAI FlowX key are pre-assigned for each capability.

Assigning models

For each capability:
  1. Select a Default model from org-whitelisted models matching that capability, or use the provider default
  2. Optionally select a Fallback model, used if the default model is unavailable or rate-limited
Assignments are configured per workspace type (Sandbox, Staging, Production). All workspaces of a given type inherit the same model assignments.
If the default and fallback models use the same provider, a warning is displayed. Using different providers for default and fallback improves resilience against provider outages.
When Use provider default is selected, the assignment inherits the catalog default model. Changes to the catalog default automatically propagate.

Fallback behavior

When a primary model is unavailable, rate-limited, or disabled at the organization level, the system automatically switches to the configured fallback model. The resolution order is:
  1. Node-level override (if configured) → its fallback model
  2. Workspace default → its fallback model
  3. Error if all options exhausted
Scenario | Behavior
Primary model unavailable | Fallback model is used automatically
Node override model not found | Falls back to the workspace default model
API key expired or revoked | Nodes using this provider’s models fail with “API key invalid” in the execution log. The provider shows a warning at the organization level.
Both primary and fallback unavailable | Node fails with the error: “Primary and fallback models unavailable. Contact your organization admin.”

Per-node model override

Workflow designers can override the default model on individual AI nodes (Custom Agent, Intent Classification, etc.) in the Integration Designer. This allows different nodes in the same workflow to use different models: for example, a fast model for intent classification and a more capable model for complex reasoning.
Per-node model override in Integration Designer workflow

How it works

When configuring an AI node, an optional Model Override section lets you select a specific model:
Field | Description
Provider type | The provider to use (e.g., OPENAI)
Model | The specific model (e.g., gpt-4o)
Fallback provider type | Optional fallback provider if the primary is unavailable
Fallback model | Optional fallback model
The override uses portable identifiers (providerType + modelIdStr) that resolve at runtime against the organization’s available models. Only models enabled in the organization’s whitelist can be selected. When no override is set, the node uses the workspace default model for its capability (text generation, embeddings, etc.).
Per-node model overrides apply to the text generation capability only. Other capabilities (image understanding, embeddings, document/OCR) always use the workspace defaults.
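A sketch of how a portable override might resolve against the whitelist at runtime (the `providerType` and `modelIdStr` names come from the docs above; the data shapes are assumptions):

```python
# Illustrative: the organization's whitelist keyed by portable identifiers.
WHITELIST = {("OPENAI", "gpt-4o"), ("OPENAI", "gpt-4o-mini")}

def resolve_override(override: dict, workspace_default: str) -> str:
    """Return the override model if it is whitelisted; otherwise fall back
    to the workspace default (the 'node override model not found' case)."""
    key = (override.get("providerType"), override.get("modelIdStr"))
    if key in WHITELIST:
        return key[1]
    return workspace_default
```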

Supported providers

FlowX maintains a provider catalog with pre-configured provider types. Each provider type includes display metadata (name, icon) and optional base URL presets.
Provider | Key mode | Notes
OpenAI | FlowX Managed or BYOK | EU, US, and Global base URL presets

Permissions

Permission | Description
org_ai_providers_read | View AI providers and model assignments at the organization level
org_ai_providers_write | Add, edit, and delete providers; manage model whitelists; configure Defaults & Fallbacks

Related

  • AI in FlowX: Overview of AI capabilities in the FlowX platform
  • Agent Builder: Build AI agents that use the configured models
  • Conversational Workflows: Create chat experiences powered by AI models
  • Organization Settings: Manage organization-level settings
Last modified on April 9, 2026