SubmitChatOptions

| Prop | Type | Description |
| --- | --- | --- |
| `systemPrompt` | `string` | The system prompt. Default: `"You are a very enthusiastic company representative who loves to help people!"`. |
| `context` | `any` | Context to use for template variable replacements in the system prompt. Default: `{}`. |
| `model` | `OpenAIModelId` | The OpenAI model to use. Default: `"gpt-4-turbo-preview"`. |
| `policiesOptions` | `PoliciesOptions` | Options for the use of policies. |
| `retrievalOptions` | `RetrievalOptions` | Options for retrieval. |
| `outputFormat` | `'markdown' \| 'slack' \| 'html'` | The output format of the response. Default: `"markdown"`. |
| `jsonOutput` | `boolean` | If true, return the response in JSON format. Default: `false`. |
| `redact` | `boolean` | Remove PII from chat messages. Default: `false`. |
| `temperature` | `number` | The model temperature. Default: `0.1`. |
| `topP` | `number` | The model top P. Default: `1`. |
| `frequencyPenalty` | `number` | The model frequency penalty. Default: `0`. |
| `presencePenalty` | `number` | The model presence penalty. Default: `0`. |
| `maxTokens` | `number` | The maximum number of tokens to include in the response. Default: `500`. |
| `sectionsMatchCount` | `number` | The number of sections to include in the prompt context. Default: `10`. |
| `sectionsMatchThreshold` | `number` | The similarity threshold between the input question and the selected sections. Default: `0.5`. |
| `threadId` | `string` | Thread ID. Returned with the first chat response and every subsequent one. Pass it back to continue an existing thread. |
| `tools` | `ChatCompletionTool[]` | A list of tools the model may call. Currently, only functions are supported as tools. Use this to provide a list of functions the model may generate JSON inputs for. |
| `toolChoice` | `ChatCompletionToolChoiceOption` | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message and calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function. `none` is the default when no functions are present; `auto` is the default when functions are present. |
| `doNotInjectContext` | `boolean` | If true, do not inject context relevant to the query. Default: `false`. |
| `allowFollowUpQuestions` | `boolean` | If true, the bot may encourage the user to ask a follow-up question, for instance to gather additional information. Default: `true`. |
| `excludeFromInsights` | `boolean` | If true, exclude the message from insights. Default: `false`. |
| `signal` | `AbortSignal` | An `AbortController` signal, used to cancel an in-flight request. |
| `debug` | `boolean` | Enable debug mode, which logs debug and error information to the console. Default: `false`. |
| `stream` | `boolean` | If false, disable streaming and return the entire response at once. |
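As a sketch of how these options fit together, the snippet below builds an options object that overrides the defaults for retrieval and generation. The `SubmitChatOptions` interface shown is a hypothetical subset of the full type, reproduced here only so the example is self-contained; the prompt text and values are illustrative.

```typescript
// Hypothetical subset of SubmitChatOptions, declared locally so this
// example compiles on its own. The real type includes many more props.
interface SubmitChatOptions {
  systemPrompt?: string;
  model?: string;
  temperature?: number;
  topP?: number;
  maxTokens?: number;
  sectionsMatchCount?: number;
  sectionsMatchThreshold?: number;
  outputFormat?: 'markdown' | 'slack' | 'html';
  allowFollowUpQuestions?: boolean;
  threadId?: string;
}

// Conservative retrieval settings: fewer sections than the default 10,
// and a higher similarity bar than the default 0.5.
const options: SubmitChatOptions = {
  systemPrompt: 'You are a helpful assistant for the Acme docs.',
  model: 'gpt-4-turbo-preview',
  temperature: 0.1,
  maxTokens: 500,
  sectionsMatchCount: 5,
  sectionsMatchThreshold: 0.7,
  outputFormat: 'markdown',
  allowFollowUpQuestions: true,
};
```

Raising `sectionsMatchThreshold` while lowering `sectionsMatchCount` trades recall for precision: fewer, more relevant sections end up in the prompt context.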
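The `tools` and `toolChoice` props above follow the OpenAI chat-completions tool shape. The sketch below declares one function tool and forces the model to call it; the `getOrderStatus` function and its parameters are made up for illustration.

```typescript
// One function tool in the OpenAI tool format: a name, a description,
// and a JSON Schema describing the arguments the model should generate.
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'getOrderStatus', // hypothetical function name
      description: 'Look up the status of an order by its ID.',
      parameters: {
        type: 'object',
        properties: { orderId: { type: 'string' } },
        required: ['orderId'],
      },
    },
  },
];

// Force a call to getOrderStatus rather than letting the model choose.
// Omitting toolChoice would default to 'auto' since tools are present.
const toolChoice = {
  type: 'function' as const,
  function: { name: 'getOrderStatus' },
};
```

With `toolChoice` set this way, the model must respond with JSON arguments for `getOrderStatus` instead of generating a plain message.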