gristlabs_grist-core/app
Alex Hall 7fd48364df (core) Improved error messages, retries, and handling of token limits in AI assistant
Summary:
In a nutshell:

- More specific and helpful error messages are shown to the user
- API requests are only retried when needed
- The system handles reaching the maximum token limit more gracefully, especially by switching to a model with a larger limit

In more detail:

- `COMPLETION_MODEL` configuration has been removed. By default, `gpt-3.5-turbo-0613` is used, which accepts 4k tokens. If that's not enough, `gpt-3.5-turbo-16k-0613` is used instead.
- Switching to the bigger model happens when either the prompt by itself is too long (the API immediately returns an error code) or the model hits the 4k limit while generating a response and thus returns an incomplete one (see the first sketch after this list). The latter case is made possible by removing `max_tokens: 1500` from the request, which was very generous and would have led to switching to the more expensive model more often than needed. The downside is that the user has to wait a bit longer for the response.
- If the bigger 16k token limit is also exceeded, the assistant immediately responds (instead of retrying as before) with an error message including suggestions. The suggestions include restarting the conversation if and only if the user has sent multiple messages.
- If a request fails because Grist has reached its OpenAI monthly billing quota, the assistant immediately responds (instead of retrying as before) with an error message suggesting that the user try again tomorrow.
- If a request fails for some other reason, the assistant retries, and if all attempts fail the user is told to try again in a few minutes and is shown the exact error message, including the API response if there is one.
- Retrying only happens when an API request fails, whereas previously the system also retried errors from a much bigger scope that included calls to the sandbox (see the retry sketch after this list). The downside is that the Hugging Face assistant no longer retries, although that code is currently disabled anyway.
- The assistant no longer waits an additional second after the final retry attempt fails.
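
The model-switching behavior described above could look roughly like the following. This is a minimal illustrative sketch, not the actual Grist implementation: the names `DEFAULT_MODEL`, `LONGER_CONTEXT_MODEL`, `callModel`, and `isContextLengthError` are assumptions made for the example.

```typescript
// Hypothetical sketch of the model-fallback logic; names and shapes are
// assumptions, not the actual Grist code.
const DEFAULT_MODEL = "gpt-3.5-turbo-0613";             // ~4k-token context
const LONGER_CONTEXT_MODEL = "gpt-3.5-turbo-16k-0613";  // ~16k-token context

interface Message { role: string; content: string; }
interface CompletionResult {
  content: string;
  finishReason: string;   // "stop", "length", ...
}
type CallModel = (model: string, messages: Message[]) => Promise<CompletionResult>;

export async function getCompletionWithFallback(
  messages: Message[], callModel: CallModel
): Promise<string> {
  try {
    const result = await callModel(DEFAULT_MODEL, messages);
    if (result.finishReason !== "length") {
      return result.content;
    }
    // The default model hit its own 4k limit mid-response; fall through to the bigger model.
  } catch (e) {
    if (!isContextLengthError(e)) { throw e; }
    // The prompt alone was too long for the default model; fall through to the bigger model.
  }
  const result = await callModel(LONGER_CONTEXT_MODEL, messages);
  if (result.finishReason === "length") {
    // Even the 16k limit was exceeded: respond immediately with suggestions, no retry.
    throw new Error("The conversation is too long; consider restarting it or asking a shorter question.");
  }
  return result.content;
}

// Assumed helper: recognizes the API error returned when the prompt exceeds the context window.
function isContextLengthError(e: unknown): boolean {
  return e instanceof Error && /context length|maximum context/i.test(e.message);
}
```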
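
The narrower retry policy can be sketched in the same spirit. Again, this is a hypothetical sketch; `MAX_ATTEMPTS`, `isQuotaError`, and the wording of the messages are illustrative assumptions rather than what the Grist server actually does.

```typescript
// Hypothetical sketch of the retry policy: retry only the API request itself,
// never retry quota errors, and don't wait after the final attempt.
const MAX_ATTEMPTS = 3;
const RETRY_DELAY_MS = 1000;

export async function callWithRetries<T>(makeApiRequest: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      // Only the API request is inside the retry loop; sandbox calls and other
      // work happen elsewhere and are not retried.
      return await makeApiRequest();
    } catch (e) {
      if (isQuotaError(e)) {
        // The monthly billing quota is exhausted: retrying won't help.
        throw new Error("The assistant is out of quota for now; please try again tomorrow.");
      }
      lastError = e;
      if (attempt < MAX_ATTEMPTS) {
        // Wait before the next attempt, but not after the final one fails.
        await new Promise(resolve => setTimeout(resolve, RETRY_DELAY_MS));
      }
    }
  }
  // All attempts failed: show the exact error, including any API response text it carries.
  throw new Error(
    `Sorry, the assistant is unavailable right now; please try again in a few minutes. (${String(lastError)})`
  );
}

// Assumed helper: recognizes an "insufficient quota" / billing error from the API.
function isQuotaError(e: unknown): boolean {
  return e instanceof Error && /insufficient_quota|billing/i.test(e.message);
}
```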

Test Plan: Added a new server test file with several unit tests using faked OpenAI responses, including the happy path, which wasn't really tested before.
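
The kind of test this enables could look like the following. This is a hedged example assuming a mocha-style runner and the `getCompletionWithFallback` sketch above; it does not reproduce the actual test file added in this commit.

```typescript
// Hypothetical mocha-style test using a faked OpenAI call instead of the real API.
import {strict as assert} from 'assert';

describe('AI assistant model fallback (sketch)', function () {
  it('switches to the 16k model when the default model truncates its reply', async function () {
    const calls: string[] = [];
    // Faked responses: the 4k model "runs out of tokens", the 16k model succeeds.
    const fakeCallModel = async (model: string) => {
      calls.push(model);
      return model === 'gpt-3.5-turbo-0613'
        ? {content: 'partial...', finishReason: 'length'}
        : {content: 'SELECT 1;', finishReason: 'stop'};
    };
    const reply = await getCompletionWithFallback([{role: 'user', content: 'hi'}], fakeCallModel);
    assert.equal(reply, 'SELECT 1;');
    assert.deepEqual(calls, ['gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k-0613']);
  });
});
```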

Reviewers: dsagal

Reviewed By: dsagal

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3955
2023-07-18 16:01:37 +02:00
| Path | Last commit | Date |
|------|-------------|------|
| client | (core) deleting queue from single webhook | 2023-07-18 11:46:10 +02:00 |
| common | (core) Improved error messages, retries, and handling of token limits in AI assistant | 2023-07-18 16:01:37 +02:00 |
| gen-server | (core) deleting queue from single webhook | 2023-07-18 11:46:10 +02:00 |
| plugin | Use relative imports only in plugin folder (#328) | 2022-10-26 10:41:38 -04:00 |
| server | (core) Improved error messages, retries, and handling of token limits in AI assistant | 2023-07-18 16:01:37 +02:00 |
| tsconfig.json | (core) move home server into core | 2020-07-21 20:39:10 -04:00 |