AI Knowledge Base


An AI knowledge base is the structured repository of information from which an AI agent draws its responses. Unlike the training data of a Large Language Model (LLM), the knowledge base is company-specific, up-to-date, and versionable. It contains product manuals, websites, FAQs, process descriptions, pricing plans, terms and conditions, service guides, and everything the agent needs to know reliably and accurately when interacting with customers.

The knowledge base is thus the counterpart to "creative model intuition." While the LLM contributes language understanding and response generation, the knowledge base ensures factual accuracy. In combination with RAG (Retrieval-Augmented Generation), this creates a system that responds naturally while remaining brand-safe and compliant.

 

Building an AI Knowledge Base

A knowledge base that can be used effectively isn’t created by simply dumping all available documents into a vector database. Three steps are standard in BOTfriends projects:

1. Select the sources. The team decides which documents, wikis, CMS content, FAQs, and backend data are reliable and necessary for the bot.

2. Upload them. All chosen knowledge sources are uploaded to the knowledge base.

3. Chunk and embed. The platform breaks the uploaded content down into semantically meaningful units (so-called text chunks) and maps them into a vector space via embeddings so that they can be retrieved later.

Tip: The better the content is structured and formatted (e.g., using Markdown), the more accurate the bot's information will be and the higher the quality of its responses.
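As a sketch of why Markdown structure helps: chunking can follow heading boundaries, so each chunk stays on one topic. The splitting rules and the `max_chars` limit below are illustrative assumptions, not the platform's actual implementation.

```python
import re

def chunk_markdown(text: str, max_chars: int = 800) -> list[str]:
    """Split Markdown into chunks along heading boundaries.

    Headings give each chunk a self-contained topic, which is why
    well-structured source content retrieves better than a wall of text.
    """
    # Split before every ATX heading line (#, ##, ...).
    sections = re.split(r"(?m)^(?=#{1,6} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        if len(section) <= max_chars:
            chunks.append(section)
        else:
            # Oversized sections are split further on blank lines.
            current = ""
            for para in section.split("\n\n"):
                if current and len(current) + len(para) > max_chars:
                    chunks.append(current.strip())
                    current = ""
                current += para + "\n\n"
            if current.strip():
                chunks.append(current.strip())
    return chunks

doc = "# Pricing\nBasic plan: 10 EUR/month.\n\n# Support\nEmail support around the clock."
for chunk in chunk_markdown(doc):
    print(chunk.splitlines()[0])
```

Each chunk then carries its heading as context, which also makes retrieved passages easier to cite back to the source document.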

If you do your research thoroughly, choose your sources carefully, and keep them up to date, you’ll lay the groundwork for consistent answer quality. At BOTfriends, we’re happy to help you with this process.

 

Knowledge Base and Multi-Agent Orchestration

In single-prompt architectures, the entire knowledge base, or an overly large portion of it, is often included in every prompt. This leads to context contamination, higher costs, and poorer response quality. BOTfriends instead works with dedicated AI agents within a multi-agent orchestration framework: each agent has access only to the parts of the knowledge base it needs for its specific tasks.
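A minimal sketch of this scoping idea. The agent and collection names and the `AGENT_SCOPES` mapping are hypothetical, chosen for illustration; they are not a BOTfriends API.

```python
# Hypothetical mapping: each agent sees only its slice of the knowledge base.
AGENT_SCOPES = {
    "billing_agent": ["pricing_plans", "terms_and_conditions"],
    "support_agent": ["product_manuals", "service_guides", "faqs"],
}

def retrievable_collections(agent: str) -> list[str]:
    """Return only the knowledge-base collections this agent may query."""
    return AGENT_SCOPES.get(agent, [])

# A billing question never touches product manuals, which keeps the
# retrieval context small and prevents cross-topic contamination.
print(retrievable_collections("billing_agent"))
```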

 

Knowledge Base and RAG

The technical mechanism that connects the knowledge base and the AI model is called Retrieval-Augmented Generation—RAG for short. Instead of having the language model generate a response based on static knowledge, the knowledge base is first searched for every user query. The text chunks that are most semantically relevant are identified and provided to the model as context—only then does it generate a response.
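The retrieval step can be sketched as follows. The bag-of-words `embed` function is a toy stand-in for a real embedding model, used only to keep the example self-contained; in production, dense vectors from an embedding model would be compared instead.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank knowledge-base chunks by similarity to the query (the 'R' in RAG)."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "The basic plan costs 10 EUR per month.",
    "Support is available via email around the clock.",
    "Returns are accepted within 30 days.",
]
context = retrieve("How much does the basic plan cost?", chunks, k=1)
# Only the retrieved chunk is handed to the model as grounding context.
prompt = f"Answer using only this context:\n{context[0]}\n\nQuestion: How much does the basic plan cost?"
print(context[0])
```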

An additional fact check compares the generated response with the user's query once more before it is displayed. 
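A crude illustration of such a grounding check, assuming a simple word-overlap heuristic (the function name and threshold are illustrative; a real fact-checking layer would typically use the model itself or an entailment check rather than token overlap):

```python
def fact_check(answer: str, context: str, threshold: float = 0.5) -> bool:
    """Crude grounding check: does the retrieved context cover most
    content words of the generated answer? If not, the answer should
    be flagged instead of shown to the user."""
    tokens = {w.strip(".,!?").lower() for w in answer.split()}
    content_words = {w for w in tokens if len(w) > 3}
    if not content_words:
        return True  # nothing substantive to verify
    covered = sum(1 for w in content_words if w in context.lower())
    return covered / len(content_words) >= threshold

context = "The basic plan costs 10 EUR per month."
print(fact_check("The basic plan costs 10 EUR.", context))
print(fact_check("The premium tier includes unlimited seats.", context))
```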

RAG thus provides the foundation that enables a bot to deliver accurate, source-based answers rather than making things up or repeating outdated information.

 

Frequently Asked Questions (FAQ)

How often should the knowledge base be updated?

Ideally, on an ongoing basis. When it comes to pricing plans, terms and conditions, or product data, “once a quarter” is rarely enough. BOTfriends X supports automated sync workflows from CMS, DAM systems, and backend data sources, ensuring that updates are automatically reflected in the knowledge base without any manual effort.

How do you prevent the bot from hallucinating?

By having the AI agent use only the verified sources contained in the knowledge base to generate responses. A fact-checking layer further ensures that, in cases of uncertainty, the model communicates transparently rather than speculating.

Can a project use more than one knowledge base?

Yes. In BOTfriends projects, multiple knowledge bases are created in parallel to establish clear thematic boundaries. Using routing logic in the multi-agent orchestration, each agent accesses the knowledge base that is appropriate for it.

–> Back to BOTwiki - The Chatbot Wiki