LLM Dialogue Selector
The LLMSelector class is responsible for selecting the most appropriate subdialogue. It leverages the capabilities of a large language model (LLM) to achieve this objective. Below we detail how the LLM selector operates.
Overview
At the heart of the LLMSelector is a large language model that processes the conversational context along with other relevant information to choose the best path in a dialogue tree. The selector takes various factors into consideration, including the current context, the dialogue history, the user profile, and a set of predefined dialogues, each labeled with a high-level description.
The LLMSelector class assembles a prompt that includes the base prompt text, a list of relevant node references from the dialogue library, a transcript of the current dialogue, any additional information provided, and a history of previously selected dialogues. This assembled prompt is then fed into the LLM, which returns a response representing the number of the selected dialogue.
Prompt
The LLM Selector is responsible for assembling a structured prompt that is fed to the large language model to determine the next step in the dialogue. The prompt is constructed from various pieces of contextual information and has the following structure:
Base Prompt: This is a predefined string that instructs the LLM on its main objective, which is to facilitate a meaningful, coherent, and engaging conversation.
Dialogue Library: A list of potential dialogue options each accompanied by a high-level description. These options are available for the LLM to choose from.
Transcript of Dialogue: A summary of the recent dialogue history, limited to a predefined number of turns, which gives the LLM a view of the ongoing conversation.
Additional Info: Optional parameter where any extra information can be added to guide the LLM in making a selection.
History of Selections: This part lists the history of previously selected dialogues to avoid repetitions and to maintain a coherent flow in the conversation.
Selection Instruction: The final part of the prompt instructs the LLM to output the number of the dialogue to continue the conversation.
You can modify the Base Prompt and the Additional Info. Moreover, you can change the descriptions of the selectable subdialogues. The Transcript of Dialogue, the History of Selections, and the Selection Instruction are created automatically.
Example of a Complete Prompt
Below is an illustrative example of how a complete prompt might look:
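The original example is not reproduced in this export. Purely as an illustration of the structure described above (the wording below is invented, not the actual default base prompt), an assembled prompt might look like this:

```
You are selecting the next dialogue to facilitate a meaningful, coherent,
and engaging conversation.

Dialogues:
1. A dialogue about the latest sports events.
2. A dialogue about the philosophical concept of time.
3. A dialogue about summer holiday destinations.

Transcript of dialogue:
User: Hi there!
Bot: Hello! What would you like to talk about?

Additional info:
The user mentioned they enjoy travelling.

Previously selected dialogues: none

Output only the number of the dialogue that should continue the conversation.
```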
Default Base Prompt
The default base prompt guides the LLM in selecting a dialogue, emphasizing meaningfulness, coherence, and engagingness as the key criteria. It advises against repeating dialogues and encourages the selection of dialogues that foster a more engaging conversation by asking open-ended questions or learning more about the user.
How to Use LLM Selector
In this example, we will guide you through the setup of an LLM Selector that can choose between a conversation about the latest sports events, the philosophical concept of time, and summer holiday destinations.
1. Create a dialogue
Create a dialogue according to the picture below.
2. Write Dialogue Descriptions
Open each subdialogue node and create a description in R-code. Briefly explain the role, topic, or goal of the dialogue. Prepend the description with Des:. Thanks to this prefix, the LLM selector recognizes the text as a dialogue description.
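For example, a description for the sports subdialogue might read as follows (the wording is illustrative; only the Des: prefix is required):

```
Des: A conversation about the latest sports events and results.
```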
3. Define LLM Selector in Init Code
Open the Init Code and paste the following code into it:
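The snippet itself is missing from this export; based on the constructor described later on this page, the init code presumably defines a selector instance along these lines (treat this as a sketch, not the verbatim original):

```kotlin
// Create a selector with default settings; the variable name is illustrative.
val selector = LLMSelector()
```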
The LLMSelector() constructor takes an optional parameter llmConfig, in which you can specify the configuration of the large language model that will make the selection of the dialogue. You can, for example, specify that you want to use GPT-4:
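The exact fields of LLMConfig are not documented on this page, so the model field below is an assumption; the intent is only to show where the model choice is passed:

```kotlin
// Hypothetical: the name of the model field in LLMConfig is an assumption.
val selector = LLMSelector(llmConfig = LLMConfig(model = "gpt-4"))
```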
Moreover, LLMSelector() takes an optional parameter basePrompt, through which you can specify the base part of the prompt.
And finally, LLMSelector() takes an optional parameter numTurns, which specifies the number of turns used to construct the transcript of the dialogue.
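Putting these two parameters together, a customized selector might be created as follows (the prompt text is just an example):

```kotlin
val selector = LLMSelector(
    // Custom base prompt replacing DEFAULT_BASE_PROMPT.
    basePrompt = "Select the dialogue that keeps the conversation engaging " +
        "and avoids topics the user has already discussed.",
    // Use only the last 10 turns when building the transcript section.
    numTurns = 10
)
```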
4. Call LLM Selector in Function
Insert the following code into the upper function of the dialogue:
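The snippet is missing from this export. Given that selectTransition() is the call described below and ByeSpeech is the fallback node, the function body is presumably of roughly this shape (how the fallback node is passed is an assumption):

```kotlin
// Sketch: ask the selector for the next transition; if the selection
// fails, the Bye! speech node (ByeSpeech) is used as a fallback.
selector.selectTransition(ByeSpeech)
```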
The ByeSpeech refers to the name of the Bye! speech node. This node serves as a fallback if the selection fails.
There is an optional parameter additionalInfo in the selectTransition() function, which you can use to pass additional information into the prompt, such as a string representation of the user profile.
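For example, a user profile could be passed in as follows (the profile string and the way the fallback node is passed are illustrative):

```kotlin
selector.selectTransition(
    ByeSpeech,
    additionalInfo = "User profile: enjoys travelling, follows football."
)
```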
5. Create Transition Back To Dialogue Selector Node
In order to make another selection after the selected dialogue ends, you have to create a transition back to the Dialogue Selector node. Open the bottom function and paste the following code into it:
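The snippet is missing from this export; since the bottom function should lead back to the selector, its body presumably just references the selector function (a sketch only, the exact transition syntax may differ):

```kotlin
// Transition back to the Dialogue Selector function so that another
// selection is made after this subdialogue ends.
DialogueSelector
```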
The DialogueSelector refers to the name of the Dialogue Selector function from the previous step.
LLMSelector Constructor
The LLMSelector class provides a constructor that allows you to initialize a new instance with specific configurations. Below we break down each parameter that can be passed to the constructor and what it represents:
Parameters
llmConfig: LLMConfig = LLMConfig()
Type: LLMConfig
Default: An instance with default settings
Description: This parameter represents the configuration of the large language model (LLM). You can pass a custom LLMConfig object to alter the behavior of the LLM according to your requirements.
basePrompt: String = DEFAULT_BASE_PROMPT
Type: String
Default: The DEFAULT_BASE_PROMPT constant defined in the LLMSelector class
Description: The base prompt is a predefined instruction that guides the LLM in selecting an optimal dialogue option. It emphasizes facilitating a meaningful, coherent, and engaging conversation. You can provide a custom string to use as the base prompt to guide the LLM in a specific direction.
numTurns: Int = 40
Type: Int
Default: 40
Description: Represents the number of previous dialogue turns to be included in the transcript section of the prompt. Adjusting this parameter affects how much of the conversation history the LLM considers while making a selection.
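Combining the parameters above, a fully configured selector might look like this (the LLMConfig contents and prompt wording are illustrative):

```kotlin
val selector = LLMSelector(
    llmConfig = LLMConfig(),  // default LLM settings
    basePrompt = "Choose the dialogue that best continues the conversation.",
    numTurns = 20             // consider the last 20 turns
)
```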