mirror of
https://github.com/gristlabs/grist-core.git
synced 2024-10-27 20:44:07 +00:00
7fd48364df
Summary: In a nutshell:
- More specific and helpful error messages are shown to the user
- API requests are only retried when needed
- The system deals better with reaching the maximum token limit, especially by switching to a model with a bigger limit

In more detail:
- `COMPLETION_MODEL` configuration has been removed. By default `gpt-3.5-turbo-0613` is used, which accepts 4k tokens. If that's not enough, `gpt-3.5-turbo-16k-0613` is used instead.
- Switching to the bigger model happens when either the prompt is too long by itself (the API immediately returns an error code) or the model reaches the 4k limit in the process of generating a response and thus returns an incomplete response. The latter case is made possible by removing `max_tokens: 1500` from the request, which was very generous and would have led to switching to the more expensive model more often than needed. The downside is that the user has to wait a bit longer for the response.
- If the bigger 16k token limit is also exceeded, the assistant immediately responds (instead of retrying as before) with an error message including suggestions. The suggestions include restarting the conversation if and only if the user has sent multiple messages.
- If a request fails because Grist has reached its OpenAI monthly billing quota, the assistant immediately responds (instead of retrying as before) with an error message suggesting that the user try again tomorrow.
- If a request fails for some other reason, the assistant retries, and if all attempts fail then the user is told to try again in a few minutes and is shown the exact error message, including the API response if there is one.
- Retrying only happens when an API request fails, whereas previously the system also retried errors from a much bigger scope, which included calls to the sandbox. The downside is that the Hugging Face assistant no longer retries, although that code is currently disabled anyway.
- The assistant no longer waits an additional second after the final retry attempt fails.

Test Plan: Added a new server test file with several unit tests using faked OpenAI responses, including the happy path, which wasn't really tested before.

Reviewers: dsagal

Reviewed By: dsagal

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3955
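The retry and model-fallback behavior described in the commit message can be sketched roughly as follows. This is a hypothetical illustration, not Grist's actual implementation: `callModel`, `TokenLimitError`, and the attempt count are illustrative stand-ins.

```typescript
// Hypothetical sketch of the behavior described above.
// `TokenLimitError`, `ModelCaller`, and API_ATTEMPTS are
// illustrative stand-ins, not Grist's actual code.
class TokenLimitError extends Error {}

type ModelCaller = (model: string, prompt: string) => Promise<string>;

const DEFAULT_MODEL = 'gpt-3.5-turbo-0613';            // 4k-token context
const LONGER_CONTEXT_MODEL = 'gpt-3.5-turbo-16k-0613'; // 16k-token context
const API_ATTEMPTS = 3;                                // illustrative count

// Retry transient API failures. Token-limit errors are rethrown
// immediately (they are handled by switching models, not retrying),
// and there is no extra wait after the final attempt fails.
async function withRetries<T>(fn: () => Promise<T>): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (e) {
      if (e instanceof TokenLimitError || attempt >= API_ATTEMPTS) {
        throw e;
      }
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }
}

// Try the cheaper 4k-context model first; fall back to the
// 16k-context model only when the token limit is the problem.
async function completeWithFallback(callModel: ModelCaller, prompt: string): Promise<string> {
  try {
    return await withRetries(() => callModel(DEFAULT_MODEL, prompt));
  } catch (e) {
    if (!(e instanceof TokenLimitError)) { throw e; }
    return await withRetries(() => callModel(LONGER_CONTEXT_MODEL, prompt));
  }
}
```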
59 lines
1.6 KiB
TypeScript
import {DocAction} from 'app/common/DocActions';

/**
 * State related to a request for assistance.
 *
 * If an AssistanceResponse contains state, that state can be
 * echoed back in an AssistanceRequest to continue a "conversation."
 *
 * Ideally, the state should not be modified or relied upon
 * by the client, so as not to commit too hard to a particular
 * model at this time (it is a bit early for that).
 */
export interface AssistanceState {
  messages?: AssistanceMessage[];
}

export interface AssistanceMessage {
  role: string;
  content: string;
}

/**
 * Currently, requests for assistance always happen in the context
 * of the column of a particular table.
 */
export interface FormulaAssistanceContext {
  type: 'formula';
  tableId: string;
  colId: string;
}

export type AssistanceContext = FormulaAssistanceContext;

/**
 * A request for assistance.
 */
export interface AssistanceRequest {
  context: AssistanceContext;
  state?: AssistanceState;
  text: string;
  regenerate?: boolean;  // Set if there was a previous request
                         // and response that should be omitted
                         // from history, or (if available) an
                         // alternative response generated.
}

/**
 * A response to a request for assistance.
 * The client should preserve the state and include it in
 * any follow-up requests.
 */
export interface AssistanceResponse {
  suggestedActions: DocAction[];
  state?: AssistanceState;
  // If the model can be trusted to issue a self-contained
  // markdown-friendly string, it can be included here.
  reply?: string;
}
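As a usage sketch of these types, a client continuing a "conversation" echoes the `state` from each response back in its next request. This is a hypothetical client-side example, not code from the file: `send`, the table/column IDs, and the prompts are illustrative, and the types are repeated (with `DocAction` simplified to `unknown[]`) so the sketch is self-contained.

```typescript
// Types repeated from the file above so this sketch is self-contained;
// DocAction is simplified to `unknown[]` for illustration.
type DocAction = unknown[];
interface AssistanceMessage { role: string; content: string; }
interface AssistanceState { messages?: AssistanceMessage[]; }
interface FormulaAssistanceContext { type: 'formula'; tableId: string; colId: string; }
type AssistanceContext = FormulaAssistanceContext;
interface AssistanceRequest {
  context: AssistanceContext;
  state?: AssistanceState;
  text: string;
  regenerate?: boolean;
}
interface AssistanceResponse {
  suggestedActions: DocAction[];
  state?: AssistanceState;
  reply?: string;
}

// Hypothetical two-turn conversation: the state from the first
// response is passed along unmodified in the follow-up request,
// so the model sees the conversation history. `send` stands in
// for whatever transport the client actually uses.
async function converse(
  send: (req: AssistanceRequest) => Promise<AssistanceResponse>
): Promise<AssistanceResponse> {
  const context: AssistanceContext = {type: 'formula', tableId: 'Table1', colId: 'Total'};

  const first = await send({context, text: 'Sum the Price column'});

  // Echo the previous state back without modifying it.
  return send({
    context,
    state: first.state,
    text: 'Now round it to two decimals',
  });
}
```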