Agents & Assets

Overview

By default, all agents within a project can access:

Knowledge Bases

Integrations

Multimodal Interactions

Extractors

Knowledge Bases

Knowledge bases are shared repositories of information that Relational Agents use to ground their responses in reliable facts. They function as structured content libraries within each project, enabling agents to answer questions accurately, perform complex reasoning, and remain consistent across conversations.

How Knowledge Bases Work in PromethistAI

Knowledge bases are project-level assets:

  • Every agent in a project inherits access to the same set of knowledge bases.

  • Updates or new uploads become immediately available to all agents in that project.

  • Knowledge bases are stored and parsed centrally, so administrators maintain one canonical version.

  • Renaming: You can change a knowledge base’s display name at any time. This does not affect its content or project-level availability. If an agent references a knowledge base (e.g., @old_name), the reference is updated automatically to @new_name.

When an agent receives a query, it combines relational intelligence with structured retrieval from the knowledge base. This allows it to adaptively pull facts, mirror user context, and deliver an answer that is both empathetic and precise.

If multiple knowledge bases are available, the agent will attempt to select the most relevant source. For best results, you can instruct the agent explicitly to use a particular knowledge base for specific topics.

Supported Formats

PromethistAI supports a wide range of enterprise file formats:

Text and Documents

PDF, TXT, DOC, DOCX, HTML

Data Files

CSV, XLSX

Presentations

PPTX

Images

JPG, JPEG, PNG, TIFF

When uploaded, each file is parsed so its contents are searchable in conversation.

Use Cases

Knowledge bases are designed to serve diverse business needs:

  • Customer Support: FAQ repositories, policy handbooks, escalation flows.

  • Sales and Retail: Product catalogs, promotional guides, price lists.

  • Technical Support: Troubleshooting manuals, diagnostic checklists, configuration notes.

  • Onboarding and Training: HR guidelines, compliance materials, training decks.

Best Practices

To get the most from knowledge bases:

  • Be explicit: Instruct an agent which knowledge base to consult for a given scenario. Example: "When answering questions about Company X, always use the Company_data knowledge base."

  • Keep content structured: Use clear headings, consistent formatting, and concise entries. Parsed documents with logical sections produce better retrieval quality.

  • Segment by purpose: Create separate knowledge bases for distinct domains (e.g. Support_FAQ, Retail_Catalog, Compliance_Guides).

  • Update regularly: Outdated documents stay live until removed. Review knowledge bases periodically to ensure accuracy.

Managing Knowledge Bases

Uploading

To add a new knowledge base:

  1. Click Knowledge Bases in the left sidebar.

  2. Select + Create in the top right corner.

  3. In the Add Knowledge window, choose how to import content:

    1. File Upload – drag and drop a file, or click Select File to browse.

    2. Import URL – provide a web address to ingest webpage content.

Once uploaded, the file is parsed automatically and made available to all agents in the project.

Viewing

Knowledge bases are accessible from the Knowledge Bases section in the left sidebar of the administration console.

  • To preview content, select View.

  • To retire a knowledge base, use the Archive option (bin icon). Archived knowledge bases are removed from agent access but remain in history for audit.

Integrations

This section lists all integrations and allows you to create new connections to external services and data sources.

Integration Types

When creating a new integration, the app first asks you to choose the integration type. Currently, only the custom type is available.

Custom Integrations

There are three sub-types of custom integrations:

  • Before Session Webhooks

  • After Session Webhooks

  • MCP Servers

All these integrations are technology agnostic: both webhooks and MCP servers can be implemented in many ways, including low-code/no-code platforms such as n8n, Zapier, Make, and Workato. In the platform, you simply link your external services and tell the agent how to use them.

Before Session Webhooks

Before session webhooks are triggered before the first utterance of the conversation and are typically used to fetch data into the context. This includes customer profiles, relevant information from other systems, user preferences, predictions from other AI models, and more.

Configuration Fields

Description

This text is added into the context of the language model before the actual output of the webhook. It should explain what kind of information the agent is getting from this source and/or how it should be used.

URL

URL of the webhook to be called before each session. This URL should return text that will be added into the context.

Method

HTTP method to be used with the webhook. Currently only POST is supported.

Headers

Key-value pairs that should be sent as HTTP headers. Typically used for authentication headers.

Timeout

How long (in milliseconds) the engine should wait for the webhook to finish.

Retries

How many times the engine will try in case of failure or timeout before giving up. If your action is not idempotent (it has side-effects that should not be repeated), you may want to disable retries (value = 0).

Default Value

If the webhook does not work (after all retries the service still fails or times out), this value is added into the context instead of the real response. This can inform the model that contextual data could not be retrieved, or it can be empty if the model does not need to know.

Request Format

The webhook receives information in this format:

{
  "userId": "<user_id>",
  "deviceId": "<device_id>",
  "agentRef": "<agent_ref>",
  "sessionId": "<session_id>",
  "attributes": {
    "name": "<name_from_identity_provider>",
    "email": "<email_from_identity_provider>"
  }
}

The name and email attributes are present only if the client has authenticated against an identity provider. Authentication is optional for public projects but can be enforced.
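For illustration, a minimal before-session webhook can be built with nothing beyond Python’s standard library. The handler below reads the payload shown above and returns plain text that the engine adds to the agent’s context; the greeting logic is a hypothetical stand-in for a real CRM or profile lookup:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def build_context(payload: dict) -> str:
    """Turn the documented session payload into text for the agent's context."""
    name = payload.get("attributes", {}).get("name", "an unknown user")
    # A real integration would look up CRM or profile data by payload["userId"].
    return f"You are talking to {name}."

class BeforeSessionHandler(BaseHTTPRequestHandler):
    """Answers the platform's POST with plain text to add to the context."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = build_context(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve locally (blocks):
# HTTPServer(("0.0.0.0", 8080), BeforeSessionHandler).serve_forever()
```

Because the engine only uses the response body as context text, the endpoint simply needs to answer the POST with plain text within the configured timeout.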

After Session Webhooks

After session webhooks are used to inform other services about the outcomes of the conversation and trigger various reactions. You can create a summary and send it via email, extract information and post it into a Slack channel, or update your CRM.

Configuration Fields

URL

URL of the webhook to be called after each session.

Method

HTTP method to be used with the webhook. Currently only POST is supported.

Headers

Key-value pairs that should be sent as HTTP headers. Typically used for authentication headers.

Timeout

How long (in milliseconds) the engine should wait for the webhook to finish.

Retries

How many times the engine will try in case of failure or timeout before giving up. If your action is not idempotent (it has side-effects that should not be repeated), you may want to disable retries (value = 0).

Request Format

Your webhook will receive information in this format:

{
  "sessionId": "<session_id>",
  "contentRef": "<agent_ref>",
  "userAgent": "<client_version>",
  "userId": "<user_id>",
  "username": "<username>",
  "transcript": "<transcript>",
  "attributes": {
    "name": "<name_from_identity_provider>",
    "email": "<email_from_identity_provider>"
  }
}

The name and email attributes are present only if the client has authenticated against an identity provider. Authentication is optional for public projects but can be enforced.
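As a sketch, an after-session receiver might reduce the payload shown above to a short notification for another system. The message format and word count below are illustrative, not part of the platform:

```python
def to_notification(payload: dict) -> str:
    """Format an after-session payload as a one-line notification message."""
    user = payload.get("attributes", {}).get("name") or payload.get("username", "unknown")
    words = len(payload.get("transcript", "").split())
    return f"Session {payload.get('sessionId', '?')} with {user}: {words} words exchanged."

# Example payload in the documented shape:
example = {
    "sessionId": "abc-123",
    "contentRef": "support-agent",
    "username": "alice",
    "transcript": "Hello I need help with my bill",
    "attributes": {"name": "Alice", "email": "alice@example.com"},
}
print(to_notification(example))  # Session abc-123 with Alice: 7 words exchanged.
```

The same function could feed an email, a Slack message, or a CRM update, as described above.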

MCP Servers

MCP (Model Context Protocol) is a standard mechanism used for agents to interact with other services and tools. While knowledge bases provide static content and webhooks trigger at specific conversation points, MCP servers provide dynamic, self-described tools that the agent (or more precisely, the Large Language Model) can choose to call during conversation.

MCP integrations are managed at the project level. Once connected, they are available to every Relational Agent inside that project. This makes them shared assets: one integration can power many agents without duplicate configuration.

When invoked in conversation, the MCP integration acts as an extension of the agent. The agent decides when to call it, passes the required context, and integrates the response back into the dialogue. This ensures that the technical exchange with the external system remains invisible to the end user — they experience a continuous, natural interaction, while the agent silently handles the back-end integration. MCP integrations are typically used to:

  • Connect agents to internal ticketing or CRM systems.

  • Allow secure calls to enterprise APIs.

  • Integrate specialised data or toolchains directly into conversational flows.

Configuration Fields

URL

The endpoint URL of your MCP server.

Transport Type

Transport type used to communicate with the MCP server. Streamable HTTP is recommended; HTTP+SSE is supported for backward compatibility.

Auth Header Name

The name of the authentication header if required by your MCP server.

Auth Header Value

The value of the authentication header if required by your MCP server.

Selecting Tools

Once you’ve filled in the configuration, click Try Connection and select which tools from the linked MCP server should be added into the context.

Typically, you only want to add the tools that are actually needed into the context. Too many unnecessary tools could confuse your agent.

Multimodal Interactions

Multimodal interaction in PromethistAI allows Relational Agents to go beyond voice and text by embedding structured experiences directly inside the conversation. Instead of relying only on dialogue, agents can present users with interactive elements such as webpages, forms, or image choices.

These assets make interactions more efficient and engaging:

  • A support agent can display a troubleshooting flow with input fields.

  • A retail agent can show product images to choose from.

  • A training agent can present a knowledge check inline.

The user never leaves the conversation; all steps happen in one seamless flow.

Multimodal assets are configured at the project level and are automatically available to all agents in that project. Once published, they appear directly in the PromethistAI iOS and Android client apps during live conversations.

When an agent reaches a point in its process where structured input or display is needed, it calls the multimodal asset. The app then renders the corresponding interaction inline — whether it is a form, a web view, or an image selection. The user’s responses are captured and passed back to the agent, which can continue the dialogue using both conversational context and the structured input.

Because assets are shared at the project level, they can be reused across multiple agents. This ensures consistent design and reduces setup time when scaling projects.

Supported Interaction Types

PromethistAI supports a growing set of multimodal assets, including:

Webpages

Embedded views that let users interact with existing enterprise web tools without leaving the conversation.

Images

Enable relational agents to display images during the conversation.

Input Fields

Structured forms for capturing details such as account numbers, contact information, or survey responses.

Choices

Selectable tiles that let users choose between multiple options.

Extractors

What it is

Extractors are a new asset type that defines the metric an agent should measure and optimize during a conversation (alongside the agent’s default relational goals).

Examples: "Sentiment toward my brand," "Customer understanding of our product offering," "Types of issues mentioned by users."

Why it matters

  • Make agent behavior goal-oriented.

  • Drive better outcomes by giving agents a clear metric to monitor and improve in real time.

How it works

You define an extractor with a name, function (prompt), output type, and an optional aggregation method. All agents in the project will use these extractors when planning and will try to improve the specified metric over the course of each conversation.

Set up an extractor

  1. Go to Project > Extractors.

  2. Click Create to add a new extractor or Edit to modify an existing one.

  3. Name the extractor. Use a clear, descriptive name the agent will see, for example:

    • Brand sentiment

    • Product knowledge level

    • Reported issue type

  4. Define the function. This is a short instruction that tells the agent what to measure and how to optimize it (increase or decrease).

    Examples:

    • "Measure customer sentiment toward MyCompany. Focus on sentiment about our physical products; ignore sentiment about our online services. Optimize for higher sentiment."

    • "Detect issues the user mentions. Acknowledge and address each issue as soon as it appears."

  5. Choose the output type. This determines the format of the extractor’s result:

    Text

    A free-text extraction. Example: "Feedback the user gave about the new product tiers."

    Boolean (Yes/No)

    A binary outcome with clear meanings for Yes and No. Examples: Yes = "Likely a native speaker"; No = "Likely not a native speaker".

    Classification

    One value from a predefined set of categories (you can add short descriptions). Example categories:

      • POOR_QUALITY — user says the service is unreliable or not working as expected

      • NOT_ACCESSIBLE — user cannot log in or reach the portal

      • TOO_EXPENSIVE — user says the price or bill is too high

      • NONE — user is satisfied; no issues raised

    Number

    A numeric scale. Minimum is always 1 and maximum is always 5. Define what each end of the scale means. Example anchors: 1 = "Strongly negative toward my brand", 5 = "Strongly positive toward my brand".

  6. Select an aggregation method (optional). Aggregation smooths extractor results over time so the agent reacts to meaningful trends rather than momentary spikes. Depending on the output type, you’ll see options such as:

      None

      Use only the most recent raw value.

      Average

      Use the average value across the conversation.

      Moving average

      Use the average of the last five values.

      Trend

      Map the trend to a value between -1 and 1 (improving = 1, declining = -1).

      Z-score

      Show how far a value is from the mean, statistically.

      List

      Use the list of all extracted values.

      Histogram

      Use frequency counts of extracted values.

      Rolling window

      Use the last five values.

    Only aggregation methods suited to the extractor’s type will be available.
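One plausible reading of a few of these methods, in plain Python using only the standard library (an illustration, not the platform’s implementation):

```python
import math
from statistics import mean, stdev

def moving_average(values, window=5):
    """Average of the last `window` values (the moving average above)."""
    return mean(values[-window:])

def z_score(values):
    """How far the latest value sits from the mean, in standard deviations."""
    s = stdev(values)
    return 0.0 if s == 0 else (values[-1] - mean(values)) / s

def trend(values):
    """Least-squares slope of the series, squashed into (-1, 1)."""
    n = len(values)
    x_mean, y_mean = (n - 1) / 2, mean(values)
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return math.tanh(num / den)

def histogram(values):
    """Frequency counts of extracted values."""
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return counts
```

For example, a steadily improving sentiment series such as 1, 2, 3, 4, 5 yields a positive trend, while 5, 4, 3, 2, 1 yields a negative one.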

Best practices

  • Use no more than five extractors per project to avoid overloading the agent with too many optimization targets.

  • Keep names specific and the function prompt concise and actionable.

Assets Visibility (Project-Wide)

Project assets can be configured as Global or Non-Global, giving you control over which agents can access them.

Global Assets

Assets marked as Global are available to all agents in the project automatically, without requiring explicit references.

  • All agents can choose to use Global assets, even if not mentioned in their purpose, business process, or guardrails

  • Includes Knowledge Bases, MCP Integrations and Multimodal Interactions

  • Best for shared resources that all agents should have access to

Non-Global Assets

Assets that are not marked as Global are available only to agents that explicitly reference them using @mentions (e.g., @Support_FAQ).

  • Agent must include the asset reference in their Custom Configuration (purpose, business process, or guardrails)

  • Best for specialized resources that should be isolated to specific agents

  • Provides fine-grained control over asset access per agent within a project
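For example, a non-Global knowledge base can be made available to a single agent by referencing it in that agent’s Custom Configuration; the asset name here is hypothetical:

```
When answering questions about returns or refunds, always consult the @Support_FAQ knowledge base.
```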

Configuring Asset Visibility

To mark an asset as Global:

  1. Navigate to the asset (Knowledge Bases, MCP Integrations or Multimodal Interactions)

  2. Open an existing asset or create a new one

  3. Toggle the Global setting

  4. Save changes

You can optionally @mention a specific asset (e.g., @Support_FAQ) in an agent’s Custom Configuration to:

  • Guide the agent to prefer it

  • Avoid confusion with similar names

@mentions are never required; they’re just for precision.