Agents & Assets
Overview
By default, all agents within a project can access:
• Knowledge Bases
• Integrations
• Interactive Content
Knowledge Bases
Knowledge bases are shared repositories of information that Relational Agents use to ground their responses in reliable facts. They function as structured content libraries within each project, enabling agents to answer questions accurately, perform complex reasoning, and remain consistent across conversations.
How Knowledge Bases Work in PromethistAI
Knowledge bases are project-level assets:
- Every agent in a project inherits access to the same set of knowledge bases.
- Updates or new uploads become immediately available to all agents in that project.
- Knowledge bases are stored and parsed centrally, so administrators maintain one canonical version.
- Renaming: You can change a knowledge base’s display name at any time. This does not affect its content or project-level availability. If an agent references a knowledge base by name (e.g., @old_name), the reference is updated automatically to @new_name.
When an agent receives a query, it combines relational intelligence with structured retrieval from the knowledge base. This allows it to adaptively pull facts, mirror user context, and deliver an answer that is both empathetic and precise.
If multiple knowledge bases are available, the agent will attempt to select the most relevant source. For best results, you can instruct the agent explicitly to use a particular knowledge base for specific topics.
Supported Formats
PromethistAI supports a wide range of enterprise file formats:
Text and Documents: PDF, TXT, DOC, DOCX, HTML
Data Files: CSV, XLSX
Presentations: PPTX
Images: JPG, JPEG, PNG, TIFF
When uploaded, each file is parsed so its contents are searchable in conversation.
Use Cases
Knowledge bases are designed to serve diverse business needs:
- Customer Support: FAQ repositories, policy handbooks, escalation flows.
- Sales and Retail: Product catalogs, promotional guides, price lists.
- Technical Support: Troubleshooting manuals, diagnostic checklists, configuration notes.
- Onboarding and Training: HR guidelines, compliance materials, training decks.
Best Practices
To get the most from knowledge bases:
- Be explicit: Instruct an agent which knowledge base to consult for a given scenario. Example: "When answering questions about Company X, always use the Company_data knowledge base."
- Keep content structured: Use clear headings, consistent formatting, and concise entries. Parsed documents with logical sections produce better retrieval quality.
- Segment by purpose: Create separate knowledge bases for distinct domains (e.g. Support_FAQ, Retail_Catalog, Compliance_Guides).
- Update regularly: Outdated documents stay live until removed. Review knowledge bases periodically to ensure accuracy.
Managing Knowledge Bases
Uploading
To add a new knowledge base:
- Click Knowledge Bases in the left sidebar.
- Select + Create in the top right corner.
- In the Add Knowledge window, choose how to import content:
  - File Upload – drag and drop a file, or click Select File to browse.
  - Import URL – provide a web address to ingest webpage content.
- Once uploaded, the file is parsed automatically and made available to all agents in the project.
Viewing
Knowledge bases are accessible from the Knowledge Bases section in the left sidebar of the administration console.
- To preview content, select View.
- To retire a knowledge base, use the Archive option (bin icon). Archived knowledge bases are removed from agent access but remain in history for audit.
Integrations
What are Integrations for? Integrations connect your agent to external systems — so it can pull in data before a conversation starts, or push data out when one ends.
Integrations can be connected to platforms like Zapier, Make, N8N, or any custom service your team maintains.
Integration Types
When creating a new integration, the app first asks you about the type. Currently, only the custom type is available.
Custom Integrations
There are three sub-types of custom integrations:
- Before Session Webhooks
- After Session Webhooks
- MCP Servers
All these integrations are technology agnostic—both webhooks and MCP servers can be implemented in many ways, including various low-code/no-code platforms like N8N, Zapier, Make, and Workato. In the platform, you simply link your external services and tell the agent how it can use them.
Before Session Webhooks
Before session webhooks are triggered before the first utterance of the conversation and are typically used to fetch data into the context. This includes customer profiles, relevant information from other systems, user preferences, predictions from other AI models, and more.
Configuration Fields
Description: This text is added into the context of the language model before the actual output of the webhook. It should explain what kind of information the agent is getting from this source and/or how it should be used.
URL: URL of the webhook to be called before each session. This URL should return text that will be added into the context.
Method: HTTP method to be used with the webhook. Currently only POST is supported.
Headers: Key-value pairs sent as HTTP headers. Typically used for authentication.
Timeout: How long (in milliseconds) the engine waits for the webhook to finish.
Retries: How many times the engine retries on failure or timeout before giving up. If your action is not idempotent (it has side effects that should not be repeated), disable retries (value = 0).
Default Value: If the webhook fails after all retries, this value is added into the context instead of the real response. It can inform the model that contextual data could not be retrieved, or be empty if the model does not need to know.
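The interaction between Timeout, Retries, and Default Value can be sketched as follows. This is a minimal illustration of the described semantics, not the engine's actual implementation; the function names are invented:

```python
def call_with_fallback(call_webhook, retries, default_value):
    """Call a webhook, retrying on failure; fall back to a default value.

    `call_webhook` is any zero-argument callable that returns the webhook's
    response text, or raises on failure/timeout. With retries=0 the webhook
    is attempted exactly once (useful for non-idempotent actions).
    """
    attempts = 1 + retries  # the first call plus the configured retries
    for _ in range(attempts):
        try:
            return call_webhook()
        except Exception:
            continue
    # All attempts failed: inject the default value into the context instead.
    return default_value
```

For example, `call_with_fallback(fetch_profile, retries=2, default_value="Profile data unavailable.")` would try `fetch_profile` up to three times before falling back to the placeholder text.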
Request Format
The webhook receives information in this format:
{
  "userId": "<user_id>",
  "deviceId": "<device_id>",
  "agentRef": "<agent_ref>",
  "sessionId": "<session_id>",
  "attributes": {
    "name": "<name_from_identity_provider>",
    "email": "<email_from_identity_provider>"
  }
}
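Your endpoint receives this payload and returns plain text, which the engine appends to the agent's context after the integration's Description. A server-side sketch of that logic, with the payload fields taken from the format above (the returned wording and any CRM lookup are invented examples):

```python
def before_session_handler(payload: dict) -> str:
    """Build context text from the before-session webhook payload.

    The returned string is what the engine adds to the agent's context.
    In a real integration you would typically look up CRM or profile
    data by userId here before composing the text.
    """
    attrs = payload.get("attributes", {})
    name = attrs.get("name") or "unknown user"
    email = attrs.get("email") or "not provided"
    return (
        f"User name: {name}. "
        f"Email: {email}. "
        f"Session: {payload.get('sessionId', 'n/a')}."
    )
```

Whatever framework hosts the endpoint, it should respond to the POST with this text in the body so the engine can inject it.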
After Session Webhooks
After session webhooks are used to inform other services about the outcomes of the conversation and trigger various reactions. You can create a summary and send it via email, extract information and post it into a Slack channel, or update your CRM.
Configuration Fields
URL: URL of the webhook to be called after each session.
Method: HTTP method to be used with the webhook. Currently only POST is supported.
Headers: Key-value pairs sent as HTTP headers. Typically used for authentication.
Timeout: How long (in milliseconds) the engine waits for the webhook to finish.
Retries: How many times the engine retries on failure or timeout before giving up. If your action is not idempotent (it has side effects that should not be repeated), disable retries (value = 0).
Request Format
Your webhook will receive information in this format:
{
  "sessionId": "<session_id>",
  "contentRef": "<agent_ref>",
  "userAgent": "<client_version>",
  "userId": "<user_id>",
  "username": "<username>",
  "transcript": "<transcript>",
  "attributes": {
    "name": "<name_from_identity_provider>",
    "email": "<email_from_identity_provider>"
  }
}
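As an example of the "post it into a Slack channel" use case mentioned above, a webhook endpoint could turn this payload into a notification message. A sketch of the transformation step only (the Slack-style message shape is an assumption; actually posting it, e.g. via an incoming webhook URL, is left out):

```python
def after_session_handler(payload: dict) -> dict:
    """Turn the after-session payload into a notification message body.

    Uses the sessionId, username/userId, and transcript fields from the
    request format above; the message structure itself is illustrative.
    """
    transcript = payload.get("transcript", "")
    user = payload.get("username") or payload.get("userId", "unknown")
    preview = transcript[:200]  # keep the notification short
    return {
        "text": f"Session {payload.get('sessionId')} with {user} ended.",
        "attachments": [{"text": preview}],
    }
```

The same pattern applies to CRM updates or email summaries: parse the fields you need from the payload, then call the target system's API.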
MCP Servers
MCP (Model Context Protocol) is a standard mechanism used for agents to interact with other services and tools. While knowledge bases provide static content and webhooks trigger at specific conversation points, MCP servers provide dynamic, self-described tools that the agent (or more precisely, the Large Language Model) can choose to call during conversation.
MCP integrations are managed at the project level. Once connected, they are available to every Relational Agent inside that project. This makes them shared assets: one integration can power many agents without duplicate configuration.
When invoked in conversation, the MCP integration acts as an extension of the agent. The agent decides when to call it, passes the required context, and integrates the response back into the dialogue. This ensures that the technical exchange with the external system remains invisible to the end user — they experience a continuous, natural interaction, while the agent silently handles the back-end integration. MCP integrations are typically used to:
- Connect agents to internal ticketing or CRM systems.
- Allow secure calls to enterprise APIs.
- Integrate specialised data or toolchains directly into conversational flows.
Configuration Fields
URL: The endpoint URL of your MCP server.
Transport Type: Transport type used to communicate with the MCP server. Streamable HTTP is recommended; HTTP+SSE is supported for backward compatibility.
Auth Header Name: The name of the authentication header, if required by your MCP server.
Auth Header Value: The value of the authentication header, if required by your MCP server.
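For illustration, this is how a configured auth header pair would be attached to POST requests against the MCP endpoint. A minimal sketch using Python's standard library; the URL and token are placeholders, and the request is built but not sent:

```python
import json
import urllib.request

# Hypothetical values matching the configuration fields above.
config = {
    "url": "https://mcp.example.com/endpoint",  # placeholder MCP server URL
    "auth_header_name": "Authorization",
    "auth_header_value": "Bearer <token>",      # placeholder credential
}

def build_mcp_request(config: dict, body: dict) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST request to the MCP server."""
    return urllib.request.Request(
        config["url"],
        data=json.dumps(body).encode("utf-8"),
        headers={
            config["auth_header_name"]: config["auth_header_value"],
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

In practice the platform handles this exchange for you; the sketch only shows where the two Auth Header fields end up.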
Selecting Tools
Once you’ve filled in the configuration, click Try Connection and select which tools from the linked MCP server should be added into the context.
Typically, you only want to add the tools that are actually needed into the context. Too many unnecessary tools could confuse your agent.
Interactive Content
What is Interactive Content for? Sometimes conversation alone isn’t enough. Interactive content lets your agent show or collect structured content mid-conversation — without the user ever leaving the session.
You configure these interactions once at the project level, and any agent in the project can use them. When the agent decides the moment is right, it triggers the interaction — the app renders it, captures the user’s response, and hands it back to the agent to continue the conversation.
Supported Interaction Types
PromethistAI supports a growing set of multimodal assets, including:
Choices: Selectable tiles that let users choose between multiple options.
Images: Enable relational agents to display images during the conversation.
Input Fields: Structured forms for capturing details such as account numbers, contact information, or survey responses.
Webpages: Embedded views that let users interact with existing enterprise web tools without leaving the conversation.
Handover: Transfer the conversation to another relational agent, ending the current session and starting a new one with the selected agent.
Video: Enable relational agents to play video content during the conversation.
Handover Interaction
The Handover interaction allows agents to transfer conversations to other relational agents. When triggered, the current session ends and a new conversation begins with the target agent.
This is useful for:
- Escalating conversations to specialist agents
- Routing users based on their needs or intent
- Transferring to agents with specific expertise or capabilities
Configuration
When creating a Handover interaction, configure the following fields:
Title: The display name for this handover interaction (e.g., "Talk to relational agent").
Tool Description: Instructions for the agent about when and how to use this handover. This description helps the agent understand the purpose of the transfer (e.g., "End the session, switch to relational agent and start the conversation").
Agent: Choose the target agent for the handover:
  - Automatic – let the system select the most appropriate agent based on context.
  - Specific Agent – select a particular agent (published or unpublished).
Global: Toggle on to make this handover available to all agents in the project. Toggle off to limit access to agents that explicitly reference it.
Evaluations
What are Evaluations for? After each conversation, an evaluation can automatically score what happened — so you don’t have to listen to every recording or read every transcript to know how things are going.
Example: Sales training — Your agent runs practice calls with sales reps. After each call, an evaluation scores how confident the rep sounded, whether they handled objections well, and what they should work on next. You see the results in Analytics without doing anything manually.
Example: Customer support — Your agent handles support requests. An evaluation checks whether the customer’s issue was actually resolved and how much effort it took them. If a particular topic keeps coming up unresolved, you’ll see it.
Evaluations run in two modes, and you can pick one or both: Real-time Steering (during the conversation) and Post-Conversation Analysis (after the session ends).
When using Post-Conversation Analysis, results can go to three places, in any combination: Agent Memory & Context, Analytics & Visualization, and 3rd Party Systems via webhook.
Creating an Evaluation
Navigate to Evaluations in the side menu and click + Create. Fill in the required fields:
As you configure the evaluation, a live preview panel appears on the right side of the page. It shows the output structure your evaluation will produce — including the JSON schema and contextual notes for each component you’ve enabled (Steering Logic, Admin Analytics, User Facing). Use it to verify your setup before saving.
Evaluation Name: A unique name to identify the evaluation.
Description: A short description of what the evaluation measures.
Evaluation Purpose
The Evaluation Purpose determines when and how the evaluation runs. Both modes can be selected at the same time.
Real-time Steering: Injects context into the active agent during the conversation.
Post-Conversation Analysis: Runs after the session ends for analytics, feedback, and coaching.
Evaluation Prompt
This is where you define what the evaluation should look for and how it should interpret the results.
Real-time Steering: Look Back Window
When Real-time Steering is selected, a Look back window field appears. This controls how much conversation history the evaluation considers. Two options are available:
From start to now: Includes the entire conversation up to the current point. This is the default.
4 messages back to now: Includes only the last 4 messages. Use this to focus the evaluation on recent exchanges rather than the full history.
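The two options amount to slicing the conversation history before it is handed to the evaluation. A sketch, assuming the history is a simple list of messages (the option identifiers here are invented; the platform exposes them only in its UI):

```python
def apply_look_back(messages: list, window: str) -> list:
    """Select which part of the conversation history an evaluation sees.

    window is either "full" (from start to now, the default) or "last_4"
    (4 messages back to now). Both names are illustrative labels.
    """
    if window == "last_4":
        return messages[-4:]  # only the most recent exchanges
    return messages           # the entire conversation so far
```

A shorter window keeps real-time steering focused on what just happened instead of re-judging the whole session on every turn.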
Output Data Elements
Click + Add element to define the specific data points the evaluation should extract and return. Each element has a Type that determines which fields are available:
Text
Variable name: The name used to reference this output.
Extraction prompt: Describe what the evaluation should extract.
Description: Describe what this variable represents.
Number
Variable name: The name used to reference this output.
Extraction prompt: Describe what the evaluation should extract.
Min value: The minimum value on the scale.
Max value: The maximum value on the scale.
Description: Describe what this variable represents.
List
Variable name: The name used to reference this output.
Extraction prompt: Describe what the evaluation should extract.
Limit to specific items: Optionally restrict the output to a defined set of values.
Description: Describe what this variable represents.
Boolean
Variable name: The name used to reference this output.
Extraction prompt: Describe what the evaluation should extract.
True label: The label shown when the result is true.
False label: The label shown when the result is false.
Description: Describe what this variable represents.
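Conceptually, the elements you define determine the shape of the structured output the evaluation returns. A sketch of how the four element types could map to JSON-Schema-style properties (the exact schema the platform generates may differ; the live preview panel shows the real one):

```python
def element_to_schema(element: dict) -> dict:
    """Map one output data element to a JSON-Schema-style property.

    The element dict keys (type, min_value, allowed_items, ...) are
    illustrative names for the fields described above.
    """
    t = element["type"]
    schema = {"description": element.get("description", "")}
    if t == "text":
        schema["type"] = "string"
    elif t == "number":
        schema.update({"type": "number",
                       "minimum": element["min_value"],
                       "maximum": element["max_value"]})
    elif t == "list":
        items = {"type": "string"}
        if element.get("allowed_items"):  # "Limit to specific items"
            items["enum"] = element["allowed_items"]
        schema.update({"type": "array", "items": items})
    elif t == "boolean":
        schema["type"] = "boolean"  # true/false labels only affect display
    return schema
```

For instance, a Number element named `confidence` with min 1 and max 10 would yield a numeric property bounded to that scale.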
Post-Conversation Analysis: Additional Options
When Post-Conversation Analysis is selected, the following sections become available:
Agent Memory & Context
Session agent: Updates context for the specific agent used in this session.
Featured agent: Updates the main Project Lead agent with new insights.
Analytics & Visualization
User Facing: Visualises results as insights to the end-user in the app.
Admin Facing: Logs data to the Analytics Suite for comprehensive reporting.
3rd Party Systems (Webhook Integration)
Enter a webhook URL to forward evaluation results to an external system automatically.
Assets Visibility (Project-Wide)
Project assets can be configured as Global or Non-Global, giving you control over which agents can access them.
Global Assets
Assets marked as Global are available to all agents in the project automatically, without requiring explicit references.
- All agents can choose to use Global assets, even if not mentioned in their purpose, business process, or guardrails.
- Includes Knowledge Bases, MCP Integrations, and Interactive Content.
- Best for shared resources that all agents should have access to.
Non-Global Assets
Assets that are not marked as Global are available only to agents that explicitly reference them using @mentions (e.g., @Support_FAQ).
- The agent must include the asset reference in its Custom Configuration (purpose, business process, or guardrails).
- Best for specialized resources that should be isolated to specific agents.
- Provides fine-grained control over asset access per agent within a project.
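The @mention syntax used to reference Non-Global assets can be matched with a simple pattern. A sketch for illustration only (the assumption that asset names consist of letters, digits, and underscores comes from the examples in this document; the platform may accept other characters):

```python
import re

def extract_asset_mentions(text: str) -> list:
    """Find @AssetName references in agent configuration text.

    Matches names made of letters, digits, and underscores, as in
    @Support_FAQ. This character set is an assumption based on the
    example names used in this guide.
    """
    return re.findall(r"@([A-Za-z0-9_]+)", text)
```

Applied to a purpose like "When answering support questions, use @Support_FAQ", this would surface `Support_FAQ` as the referenced asset.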
Configuring Asset Visibility
To mark an asset as Global:
- Navigate to the asset (Knowledge Bases, MCP Integrations, or Interactive Content).
- Open an existing asset or create a new one.
- Toggle the Global setting.
- Save changes.
You can optionally @mention a specific asset (e.g., @Support_FAQ). @mentions are never required; they’re just for precision.