# Knowledge Base

Store your documents and enable AI to answer questions using your own data with RAG.

A Knowledge Base stores your documents and makes them searchable by AI, so your workflows can answer questions from your actual data rather than relying only on the model's training knowledge.
## How It Works

1. Upload — Add your documents (PDF, TXT, Markdown, HTML)
2. Process — Documents are split into chunks and converted to searchable embeddings
3. Query — When a user asks a question, relevant chunks are retrieved
4. Generate — The AI uses those chunks as context to generate accurate answers

This process is called Retrieval-Augmented Generation (RAG).
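The four steps above can be sketched end to end in a few lines. This is a toy illustration, not the product's API: the bag-of-words vectors stand in for real embeddings, and the helper names (`embed`, `chunk`, `retrieve`) are hypothetical.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real Knowledge Base would call an embedding model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def chunk(document, size=8):
    # Step 2 (Process): split the document into fixed-size word chunks.
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks, top_k=1):
    # Step 3 (Query): rank chunks by similarity to the question.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

doc = ("The refund policy allows returns within 30 days. "
       "Shipping is free for orders over 50 dollars. "
       "Support is available by email around the clock.")
chunks = chunk(doc)
context = retrieve("What is the refund policy?", chunks)
# Step 4 (Generate) would pass `context` to the LLM as grounding text.
print(context[0])  # the chunk about the refund policy
```

Swapping the toy vectors for a real embedding model changes only `embed`; the retrieve-then-generate flow stays the same.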
## Benefits
| Benefit | Description |
|---|---|
| Grounded Answers | AI responses are based on your actual data |
| Reduced Hallucination | Less chance of incorrect or made-up information |
| Data Control | Sensitive data stays within your infrastructure |
| Always Current | Update documents to keep AI knowledge fresh |
## Supported File Types
- PDF documents
- Plain text (.txt)
- Markdown (.md)
- HTML files
## Key Features
- Chunking Strategies — Configure how documents are split for optimal retrieval
- Multiple Embedding Models — Choose from OpenAI, AWS Bedrock, or other providers
- Metadata Filtering — Filter results by document attributes
- Semantic Search — Find relevant content based on meaning, not just keywords
## Using Knowledge Base in Workflows
Add a Knowledge Base Retrieval node to your workflow to query your documents. Connect it to an LLM node to generate answers with retrieved context.
[User Question] → [KB Retrieval] → [LLM with Context] → [Answer]
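The handoff between the KB Retrieval node and the LLM node amounts to assembling a context-augmented prompt. A minimal sketch, assuming a plain-text template; the actual workflow node may format retrieved context differently:

```python
def build_prompt(question, retrieved_chunks):
    # Join the retrieved chunks into a context block and instruct the
    # model to answer only from that context (this is what keeps
    # answers grounded and reduces hallucination).
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    "What is the refund window?",
    ["Returns are accepted within 30 days of purchase."],
)
print(prompt)
```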
Setup Guide: Configure Knowledge Base