Model Provider

Set up AI model providers to power your workflow agents.

Before creating workflows, you need to configure at least one AI model provider. This creates a reusable configuration that can be shared across all your workflows.

Supported Providers

| Provider | Models | Use Case |
| --- | --- | --- |
| OpenAI | GPT-4o, GPT-4, GPT-3.5 | General purpose, code generation |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus | Complex reasoning, long context |
| Google Gemini | Gemini Pro, Gemini Ultra | Multimodal, fast responses |
| Amazon Bedrock | Claude, Llama, Titan | Enterprise, AWS integration |
| Ollama | Llama, Mistral, custom | Self-hosted, privacy-focused |
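
The provider/model matrix above can be sketched as a simple lookup table. This is illustrative only: the identifiers below are assumptions, and the actual product manages this matrix through its UI.

```python
# Hypothetical sketch of the supported provider/model matrix.
# Provider and model identifiers are illustrative, not the product's API.
SUPPORTED_PROVIDERS = {
    "openai": ["gpt-4o", "gpt-4", "gpt-3.5"],
    "anthropic": ["claude-3.5-sonnet", "claude-3-opus"],
    "google-gemini": ["gemini-pro", "gemini-ultra"],
    "amazon-bedrock": ["claude", "llama", "titan"],
    "ollama": ["llama", "mistral"],  # custom self-hosted models can be added
}

def is_supported(provider: str, model: str) -> bool:
    """Check whether a provider/model pair appears in the matrix."""
    return model in SUPPORTED_PROVIDERS.get(provider, [])
```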
Access the Component Library

  1. Open the sidebar menu
  2. Click Configurations
  3. You'll see the Component Library page

Note:

The Component Library stores reusable configurations for models, tools, and other components.

Create a New Configuration

  1. Click the Create New tab
  2. Fill in the basic details:
| Field | Description |
| --- | --- |
| Name | Descriptive name (e.g., "OpenAI GPT-4o Production") |
| Description | Purpose of this configuration |

  3. Click Save to create the configuration shell

Fig: Model Configuration Component (Create New configuration form)

Configure Provider Settings

After creating, click Edit Configuration to set up the provider:

Select Provider Tab

Choose your provider: OpenAI, Anthropic, Google Gemini, Amazon Bedrock, or Ollama

Provider Settings

| Setting | Description | Example |
| --- | --- | --- |
| API Key | Your provider's API key | sk-... |
| Model Name | Select from dropdown | gpt-4o, claude-3-sonnet |
| Temperature | Response randomness (0-1) | 0.7 |
| System Message | Default instructions | "You are a helpful assistant" |
| Stream | Enable streaming responses | Toggle on/off |
Advanced Settings

| Setting | Description |
| --- | --- |
| Max Tokens | Maximum response length |
| Top P | Nucleus sampling parameter |
| Frequency Penalty | Reduce repetition |
| Presence Penalty | Encourage new topics |
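
The settings above can be modeled as a plain configuration object. This is a minimal sketch, not the product's actual API: the field names and the 0-1 temperature check are assumptions based on the tables in this section.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: a plain-Python model of the provider settings above.
# Field names mirror the tables; they are not the product's real schema.
@dataclass
class ModelProviderConfig:
    api_key: str
    model_name: str
    temperature: float = 0.7          # response randomness (0-1)
    system_message: str = "You are a helpful assistant"
    stream: bool = True               # must be enabled to use the config in workflows
    max_tokens: Optional[int] = None  # maximum response length
    top_p: float = 1.0                # nucleus sampling parameter
    frequency_penalty: float = 0.0    # reduce repetition
    presence_penalty: float = 0.0     # encourage new topics

    def __post_init__(self):
        if not 0.0 <= self.temperature <= 1.0:
            raise ValueError("temperature must be between 0 and 1")
```

A valid config can then be built with just the two required fields, e.g. `ModelProviderConfig(api_key="sk-...", model_name="gpt-4o")`.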

Note:

To use models from Configurable Components in workflows, both the Stream and Configure toggles must be enabled.

Fig: Model Provider Configuration (API & Settings)

Using Your Configuration

Once saved, your model configuration appears in:

  • Agent nodes in the workflow builder
  • Component selection dropdowns
  • My Configurations tab for management

In Workflow Builder

  1. Add an Agent node to your workflow
  2. In the agent settings, you will see Connect Provider (config)
  3. From the Configured Components, find the Model Provider component by name (the Configure toggle must be enabled on that component)
  4. Drag the model component onto the canvas
  5. Connect the output of the Model node (🟢) to Connect Provider on the Agent node (🔵)
  6. Once connected, you will see an animated dotted line
  7. Your agent now has an LLM model attached

Fig: Configurable Model Provider with Agent (workflow builder)

Best Practices

  • Naming convention: Include provider, model, and environment (e.g., "Anthropic Claude-3 Dev")
  • Separate configs: Create different configs for dev/staging/production
  • Temperature tuning: Use lower values (0-0.3) for factual tasks, higher (0.7-1) for creative
  • API key security: Keys are encrypted; never share configurations publicly
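
The temperature guidance above can be captured as a small helper: lower values for factual work, higher for creative work. The task labels here are illustrative assumptions, not categories the product defines.

```python
# Sketch of the temperature-tuning best practice: lower (0-0.3) for
# factual tasks, higher (0.7-1) for creative ones. Task names are made up.
def suggested_temperature(task: str) -> float:
    factual = {"extraction", "classification", "code-review"}
    creative = {"brainstorming", "copywriting", "storytelling"}
    if task in factual:
        return 0.2   # deterministic, factual output
    if task in creative:
        return 0.9   # varied, creative output
    return 0.7       # general-purpose default
```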

Troubleshooting

| Issue | Solution |
| --- | --- |
| "Invalid API key" | Verify the key is correct and has proper permissions |
| "Model not available" | Check that your provider plan supports the selected model |
| "Rate limit exceeded" | Upgrade your plan or add request throttling |
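
One common way to add the request throttling mentioned for "Rate limit exceeded" is exponential backoff with jitter. This is a generic sketch: `RateLimitError` and the callable you pass in are stand-ins for whatever your provider SDK actually raises and exposes.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a provider SDK's rate-limit exception."""

def with_backoff(call, retries=5, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each attempt
    and adding a little random jitter to avoid synchronized retries."""
    for attempt in range(retries):
        try:
            return call()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

Wrap your model call, e.g. `with_backoff(lambda: client.generate(prompt))`, tuning `retries` and `base_delay` to your provider's limits.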

Next: Workflow