Knowledge Base

Store your documents and enable AI to answer questions using your own data with RAG.

A Knowledge Base stores your documents and makes them searchable by AI. This enables your workflows to answer questions using your actual data instead of just the AI's training knowledge.

How It Works

  1. Upload — Add your documents (PDF, TXT, Markdown, HTML)
  2. Process — Documents are split into chunks and converted to searchable embeddings
  3. Query — When a user asks a question, relevant chunks are retrieved
  4. Generate — The AI uses those chunks as context to generate accurate answers

This process is called Retrieval-Augmented Generation (RAG).
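The four steps above can be sketched in miniature. This is an illustrative toy, not the platform's implementation: real systems use model-generated embeddings and a vector store, but the flow — index chunks, score them against the question, pass the top matches to an LLM — is the same.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; production systems use an
    # embedding model (e.g. from OpenAI or AWS Bedrock) instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Steps 1-2: "upload" documents and "process" them into an index.
documents = [
    "Invoices are due within 30 days of issue.",
    "Refunds are processed within 5 business days.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question, k=1):
    # Step 3: rank chunks by similarity to the question, keep top k.
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Step 4: the retrieved chunks become context in the LLM prompt.
question = "When are invoices due?"
context = "\n".join(retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
```

Here `retrieve("When are invoices due?")` surfaces the invoice document rather than the refund one, because it shares more meaningful terms with the question.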

Benefits

  • Grounded Answers — AI responses are based on your actual data
  • Reduced Hallucination — Less chance of incorrect or made-up information
  • Data Control — Sensitive data stays within your infrastructure
  • Always Current — Update documents to keep AI knowledge fresh

Supported File Types

  • PDF documents
  • Plain text (.txt)
  • Markdown (.md)
  • HTML files

Key Features

  • Chunking Strategies — Configure how documents are split for optimal retrieval
  • Multiple Embedding Models — Choose from OpenAI, AWS Bedrock, or other providers
  • Metadata Filtering — Filter results by document attributes
  • Semantic Search — Find relevant content based on meaning, not just keywords
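To make the chunking feature concrete, here is one common strategy: fixed-size chunks with overlap, so a sentence that straddles a chunk boundary still appears whole in at least one chunk. The function and its parameters are illustrative, not the platform's API; size and overlap are the kind of values a chunking strategy lets you configure.

```python
def chunk_text(text, size=200, overlap=50):
    # Sliding-window chunking: each chunk is `size` characters,
    # and consecutive chunks share `overlap` characters so content
    # at a boundary is never split across two partial chunks only.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Larger chunks give the LLM more context per retrieval hit; smaller chunks make retrieval more precise. The overlap is a hedge against losing meaning at the cut points.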

Using Knowledge Base in Workflows

Add a Knowledge Base Retrieval node to your workflow to query your documents. Connect it to an LLM node to generate answers with retrieved context.

[User Question] → [KB Retrieval] → [LLM with Context] → [Answer]
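The "LLM with Context" step amounts to assembling the retrieved chunks into a grounded prompt. A minimal sketch, assuming the retrieval node hands you a list of chunk strings (the function name and instruction wording are illustrative, not a fixed API):

```python
def build_prompt(question, chunks):
    # Number the retrieved chunks so answers can point back to a source,
    # and instruct the model to stay within the supplied context --
    # this is what keeps answers grounded and reduces hallucination.
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(chunks))
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The resulting string is what the LLM node receives; the "say you don't know" instruction is a common guard against the model falling back on its training data when retrieval comes up empty.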

Setup Guide: Configure Knowledge Base