Commit Graph

17 Commits

Author SHA1 Message Date
Paul Fitzpatrick
b3a71374d1 (core) updates from grist-core 2023-08-21 09:15:53 -04:00
Paul Fitzpatrick
0be858c19d
allow AI Assistance to run against any chat-completion-style endpoint (#630)
This adds an ASSISTANT_CHAT_COMPLETION_ENDPOINT environment variable which can be used
to enable AI Assistance instead of an OpenAI API key. The assistant
then works, at least in a mechanical sense, against any compatible endpoint.
Quality of course will depend on the model. I found some tweaks
to the prompt that work well both for Llama-2 and for OpenAI's models,
but I'm not including them here because they would conflict with some
prompt changes that are already in the works.
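
For illustration, here is a minimal sketch (in Python, though the Grist server itself is TypeScript) of what "chat-completion-style" means here: an OpenAI-format `messages` payload POSTed to whatever URL the variable points at. The default URL and payload details below are placeholders, not values taken from this change.

```python
# Illustrative only: calling a chat-completion-style endpoint such as a local
# Llama-2 server. The default URL and payload fields are placeholders.
import os
import requests

ENDPOINT = os.environ.get(
    "ASSISTANT_CHAT_COMPLETION_ENDPOINT",
    "http://localhost:8000/v1/chat/completions",  # hypothetical default
)

def complete(messages: list[dict]) -> str:
    """POST an OpenAI-format chat request and return the reply text."""
    response = requests.post(ENDPOINT, json={"messages": messages}, timeout=60)
    response.raise_for_status()
    # Compatible endpoints return choices in the OpenAI response shape.
    return response.json()["choices"][0]["message"]["content"]

print(complete([{"role": "user", "content": "Suggest a Grist formula that adds 1 to $A."}]))
```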

Co-authored-by: Alex Hall <alex.mojaki@gmail.com>
2023-08-18 16:14:42 -04:00
Alex Hall
a1d31e41ad (core) Prevent the AI assistant response from including class definitions
Summary:
Sometimes the model repeats the classes given in the prompt, which would mess up extracting the actual formula. This diff solves this by:

1. Changing the generated Python schema so that (a) the thing that needs completing is a plain top-level function instead of a property/method inside the class and (b) the classes are fully valid syntax, which makes the next step easier:
2. Removing classes from the parsed Python code when converting the completion to a formula (see the sketch after this summary).
3. Tweaking the prompt wording to discourage including classes in general, especially because the model sometimes tries to solve the problem by defining extra methods/attributes/classes.

While I was at it, I changed type hints to use builtins (e.g. `list` instead of `List`) to prevent `from typing import List` which was happening sometimes and would look weird in a formula. Similarly I removed `@dataclass` since that also implies an import, and this also fits with the tweaked wording that the classes are fake.
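
A minimal sketch of the class-stripping step from point 2, using Python's `ast` module; the helper name and exact behaviour are illustrative, not the actual Grist code.

```python
# Sketch of point 2: drop top-level class definitions that the model copied
# from the prompt, keeping imports and the top-level formula function.
import ast

def remove_classes(completion_code: str) -> str:
    tree = ast.parse(completion_code)
    tree.body = [node for node in tree.body if not isinstance(node, ast.ClassDef)]
    return ast.unparse(tree)

sample = (
    "class Table1:\n"
    "    A: int\n"
    "def formula(rec):\n"
    "    return rec.A + 1\n"
)
print(remove_classes(sample))
# def formula(rec):
#     return rec.A + 1
```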

Test Plan:
Added a new test case to the formula dataset which triggers the unwanted behaviour. The factors that seem to trigger the problem are (1) a small schema so the classes are easier to repeat and (2) the need to import modules, which the model wants to place before all other code. The case failed before this diff and succeeded after. The tweaked wording reduces the chances of repeating the classes but didn't eliminate it, so forcibly removing the classes in Python was needed.

There were also a couple of other existing cases where repeating the classes was observed before but not after.

Overall the score increased from 49 to 51 out of 69 (including the new case). At one point the score was 53, but changes in whitespace were enough to make it drop again.

Reviewers: georgegevoian

Reviewed By: georgegevoian

Differential Revision: https://phab.getgrist.com/D4000
2023-08-18 12:50:09 +02:00
George Gevoian
05c15e4ec3 (core) Show tweaked formula in AI responses
Summary:
The formula that's used when the Apply button is clicked and the formula that's
shown in responses from the Formula Assistant should now be the same. Previously, they
differed slightly.

Test Plan: Server tests.

Reviewers: alexmojaki

Reviewed By: alexmojaki

Subscribers: alexmojaki

Differential Revision: https://phab.getgrist.com/D3977
2023-08-04 01:31:31 -07:00
Alex Hall
3f71c9c488 (core) Cleanup: Remove unused AssistanceRequest.regenerate
Summary: Finish what was started in https://phab.getgrist.com/D3970

Test Plan: existing tests

Reviewers: paulfitz

Reviewed By: paulfitz

Differential Revision: https://phab.getgrist.com/D3972
2023-07-26 15:51:46 +02:00
Alex Hall
391c8ee087 (core) Allow assistant to evaluate current formula
Summary:
Replaces https://phab.getgrist.com/D3940, particularly to avoid doing potentially unwanted things automatically.

Adds optional fields `evaluateCurrentFormula?: boolean; rowId?: number` to `FormulaAssistanceContext` (part of `AssistanceRequest`). When `evaluateCurrentFormula` is `true`, calls a new function `evaluate_formula` in the sandbox which computes the existing formula in the column (regardless of anything the AI may have suggested) and uses that to generate an additional system message which is added before the user's message. In theory this could be used in an interface where users ask why a formula doesn't work, including possibly a formula suggested by the AI. For now, it's only used in `runCompletion_impl.ts` for experimenting.

Also cleaned up a bit, removing `_chatMode` which is always `true` now, and uses of `regenerate` which is always `false`.
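
A rough sketch of the request shape and the extra system message (the real types are TypeScript, and the wording of the message below is an assumption):

```python
# Rough sketch only: the real FormulaAssistanceContext is a TypeScript
# interface, and the wording of the system message here is invented.
from typing import TypedDict

class FormulaAssistanceContext(TypedDict, total=False):
    tableId: str
    colId: str
    evaluateCurrentFormula: bool  # new optional field
    rowId: int                    # new optional field

def evaluation_system_message(formula: str, result: str) -> dict:
    """Extra system message added before the user's message when
    evaluateCurrentFormula is true."""
    return {
        "role": "system",
        "content": (
            f"The column's current formula is:\n{formula}\n"
            f"Evaluating it on the requested row gives:\n{result}"
        ),
    }
```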

Test Plan: Updated `runCompletion_impl` to optionally use the new feature, in which case it now scores 51/68 instead of 49/68.

Reviewers: paulfitz

Reviewed By: paulfitz

Differential Revision: https://phab.getgrist.com/D3970
2023-07-24 21:59:00 +02:00
Alex Hall
5a703a1972 (core) Send hash of user ID in OpenAI API requests
Summary: Following recommendation in https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids
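
A sketch of the idea (the real code is TypeScript; the choice of hash function and the payload fields here are assumptions):

```python
# Sketch only: send a hash of the Grist user ID as OpenAI's `user` field
# instead of the raw ID. Hash choice and payload shape are assumptions.
import hashlib

def hashed_user_id(user_id) -> str:
    # May end up hashing "None" for anonymous/fake sessions, as in the test plan.
    return hashlib.sha256(str(user_id).encode()).hexdigest()

payload = {
    "messages": [{"role": "user", "content": "..."}],
    "user": hashed_user_id(12345),  # end-user ID per OpenAI's safety guidance
}
```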

Test Plan: Checked that running the server test shows a log of the hash of the 'user id' (which is null because it's a fake session)

Reviewers: dsagal

Reviewed By: dsagal

Subscribers: paulfitz, georgegevoian

Differential Revision: https://phab.getgrist.com/D3958
2023-07-20 19:50:41 +02:00
George Gevoian
0a34292536 (core) Add telemetry for AI Assistant
Summary: Also fixes a few bugs with some telemetry events not being recorded.

Test Plan: Manual.

Reviewers: paulfitz

Reviewed By: paulfitz

Differential Revision: https://phab.getgrist.com/D3960
2023-07-20 12:50:26 -04:00
Alex Hall
7fd48364df (core) Improved error messages, retries, and handling of token limits in AI assistant
Summary:
In a nutshell:

- More specific and helpful error messages are shown to the user
- API requests are only retried when needed
- The system deals with reaching the maximum token limit better, especially by switching to a model with a bigger limit

In more detail:

- `COMPLETION_MODEL` configuration has been removed. By default `gpt-3.5-turbo-0613` is used, which accepts 4k tokens. If that's not enough, `gpt-3.5-turbo-16k-0613` is used instead.
- Switching to the bigger model happens when either the prompt is too long by itself (the API immediately returns an error code) or the model reaches the 4k limit itself in the process of generating a response and thus returns an incomplete response (see the sketch after this list). The latter case is made possible by removing the `max_tokens: 1500` in the request, which was very generous and would have led to switching to the more expensive model more often than needed. The downside is that the user has to wait a bit longer for the response.
- If the bigger 16k token limit is also exceeded, the assistant immediately responds (instead of retrying as before) with an error message including suggestions. The suggestions include restarting the conversation if and only if the user has sent multiple messages.
- If a request fails because Grist has reached its OpenAI monthly billing quota, the assistant immediately responds (instead of retrying as before) with an error message suggesting that the user try again tomorrow.
- If a request fails for some other reason, the assistant retries, and if all attempts fail then the user is told to try again in a few minutes and is shown the exact error message, including the API response if there is one.
- Retrying only happens when an API request fails, whereas previously the system also retried errors from a much bigger scope, which included calls to the sandbox. The downside is that the Hugging Face assistant no longer retries, although that code is currently disabled anyway.
- The assistant no longer waits an additional second after the final retry attempt fails.
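
A minimal sketch of the switching/retry policy described above (the actual implementation is TypeScript; the error classes, attempt counts, and messages below are simplified assumptions):

```python
# Simplified sketch of the retry and model-switching policy; error classes,
# attempt counts, and messages are assumptions, not the real code.
import time

class AssistantError(Exception): pass
class ApiError(Exception): pass
class TokenLimitError(ApiError): pass
class QuotaExceededError(ApiError): pass

MODELS = ["gpt-3.5-turbo-0613", "gpt-3.5-turbo-16k-0613"]  # 4k, then 16k tokens

def get_completion(messages, call_api, attempts=3):
    model = MODELS[0]
    for attempt in range(attempts):
        try:
            response = call_api(model=model, messages=messages)
        except TokenLimitError:
            if model == MODELS[-1]:
                # Even 16k is not enough: fail right away with suggestions.
                raise AssistantError("Conversation too long; consider restarting it.")
            model = MODELS[-1]  # prompt too long for 4k: switch to the bigger model
            continue
        except QuotaExceededError:
            # Monthly billing quota reached: no retry, suggest trying again tomorrow.
            raise AssistantError("Assistant temporarily unavailable; try again tomorrow.")
        except ApiError as e:
            if attempt == attempts - 1:
                raise AssistantError(f"Please try again in a few minutes. ({e})")
            time.sleep(1)  # no extra wait happens after the final attempt
            continue
        if response["finish_reason"] == "length" and model != MODELS[-1]:
            model = MODELS[-1]  # reply was cut off at the 4k limit: use 16k
            continue
        return response
    raise AssistantError("Request failed after retries.")
```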

Test Plan: Added a new server test file with several unit tests using faked OpenAI responses, including the happy path which wasn't really tested before.

Reviewers: dsagal

Reviewed By: dsagal

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3955
2023-07-18 16:01:37 +02:00
Jarosław Sadziński
d13b9b9019 (core) Billing for formula assistant
Summary:
Adding limits for AI calls and connecting those limits with a Stripe Account.

- New table in homedb called `limits`
- All calls to the AI are now routed through DocApi and measured (see the sketch after this list)
- All products now contain a special key `assistantLimit`, with a default value of 0
- The limit is reset every time the subscription period changes
- The billing page is updated with two new options that describe the AI plan
- There is a new popup that allows the user to upgrade to a higher plan
- Tiers are read directly from the Stripe product with a volume pricing model
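
As a very rough illustration of the limit check (the field names, the treatment of a 0 limit, and the error message are assumptions; the real logic lives in the home server and the Stripe configuration):

```python
# Very rough illustration only; names and the treatment of a 0 limit are
# assumptions, not the real home-server logic.
from dataclasses import dataclass

@dataclass
class Limit:
    kind: str    # e.g. "assistant"
    usage: int   # AI calls made in the current billing period
    limit: int   # the product's assistantLimit (default 0)

def record_assistant_call(limit: Limit) -> None:
    if limit.usage >= limit.limit:
        raise RuntimeError("AI assistant limit reached; upgrade to a higher plan.")
    limit.usage += 1
```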

Test Plan: Updated and added

Reviewers: georgegevoian, paulfitz

Reviewed By: georgegevoian

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3907
2023-07-10 13:24:08 +02:00
Alex Hall
bb7cf6ba20 (core) Modify prompt so that model may say it cannot help with certain requests.
Summary:
This tweaks the prompting so that the user's message is given on its own instead of as a docstring within Python. This is so that the prompt makes sense when:

- the user asks a question such as "Can you write me a formula which does ...?" rather than describing their formula as a docstring would, or
- the user sends a message that doesn't ask for a formula at all (https://grist.slack.com/archives/C0234CPPXPA/p1687699944315069?thread_ts=1687698078.832209&cid=C0234CPPXPA)

Also added wording for the model to refuse when the user asks for something that the model cannot do.

Because the code (and maybe in some cases the model) for non-ChatGPT models relies on the prompt consisting entirely of Python code produced by the data engine (which no longer contains the user's message), those code paths have been disabled for now. Updating them now seems like undesirable drag; I think it'd be better to revisit this when iteration/experimentation has slowed down and stabilised.
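
A sketch of the reworked prompt shape (the prompt text and function name here are invented, not the wording from this diff):

```python
# Sketch only: the user's message is sent as its own chat turn rather than as
# a docstring inside the Python prompt. The wording below is invented.
def build_messages(schema_code: str, user_message: str) -> list[dict]:
    system = (
        "You are a helpful assistant that writes Grist formulas.\n"
        "Here is the document schema as Python classes:\n"
        f"{schema_code}\n"
        "If the request cannot be answered with a formula, say that you "
        "cannot help with it."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]
```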

Test Plan:
Added entries to the formula dataset where the response shouldn't contain a formula, indicated by the value `1` for the new column `no_formula`.

This is somewhat successful, as the model does refuse to help in some of the new test cases, but not all. Performance on existing entries also seems a bit worse, but it's hard to distinguish this from random noise. Hopefully this can be remedied in the future with more work, e.g. automatic followup messages containing example inputs and outputs.

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3936
2023-06-27 15:57:56 +02:00
Alex Hall
52469c5a7e (core) Improve parsing formula from completion
Summary:
The previous code for extracting a Python formula from the LLM completion involved some shaky string manipulation which this improves on.
Overall the 'test results' from `runCompletion` went from 37/47 to 45/47 for `gpt-3.5-turbo-0613`.

The biggest problem that motivated these changes was that it assumed that code was always inside a markdown code block
(i.e. triple backticks) and so if there was no block there was no code. But the completion often consists of *only* code
with no accompanying explanation or markdown. By parsing the completion in Python instead of JS,
we can easily check if the entire completion is valid Python syntax and accept it if it is.

I also noticed one failure resulting from the completion containing the full function (instead of just the body),
with the necessary imports placed before that function instead of inside it. The new parsing moves the imports inside.
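
A simplified sketch of that extraction logic (the real helper in the data engine handles more cases, such as relocating imports into the function body):

```python
# Simplified sketch of the extraction described above.
import ast
import re

def extract_code(completion: str) -> str:
    """Accept the whole completion if it is already valid Python; otherwise
    fall back to the contents of a markdown code block, if any."""
    try:
        ast.parse(completion)
        return completion
    except SyntaxError:
        fence = "`" * 3
        match = re.search(fence + r"\w*\n(.*?)" + fence, completion, re.DOTALL)
        return match.group(1).strip() if match else ""
```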

Test Plan: Added a Python unit test

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: paulfitz

Differential Revision: https://phab.getgrist.com/D3922
2023-06-16 13:38:20 +02:00
Jarosław Sadziński
da323fb741 (core) Floating formula editor
Summary:
Adding a way to detach an editor. Initially this is only implemented for the formula editor, and it includes a redesign of the AI part.
- For now, the detached editor is tied to the formula assistant and both are behind the GRIST_FORMULA_ASSISTANT flag, but this can be relaxed
later on, as the detached editor can be used on its own.

- Detached editor is only supported in regular fields and on the creator panel. It is not supported yet for conditional styles, due to preview limitations.
- Old code for the assistant was removed completely, as it was only a temporary solution, but the AI conversation part was copied to the new one.
- Prompting was not modified in this diff, it will be included in the follow-up with more test cases.

Test Plan: Added only new tests; existing tests should pass.

Reviewers: JakubSerafin

Reviewed By: JakubSerafin

Differential Revision: https://phab.getgrist.com/D3863
2023-06-02 17:59:22 +02:00
Paul Fitzpatrick
51a195bd94
add support for conversational state to assistance endpoint (#506)
* add support for conversational state to assistance endpoint

This refactors the assistance code somewhat, to allow carrying
along some conversational state. It extends the OpenAI-flavored
assistant to make use of that state to have a conversation.
The front-end is tweaked a little bit to allow for replies that
don't have any code in them (though I didn't get into formatting
such replies nicely).

Currently tested primarily through the runCompletion script,
which has been extended a bit to allow testing simulated
conversations (where an error or an expected-vs-actual comparison is
pasted in as a follow-up).
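
A sketch of the conversational state being carried along (the real types are TypeScript; the field and function names here are illustrative):

```python
# Illustrative sketch; the real state type is TypeScript and the names here
# are not the actual ones.
from dataclasses import dataclass, field

@dataclass
class AssistanceState:
    messages: list = field(default_factory=list)  # prior chat turns

def assist(state: AssistanceState, user_message: str, call_model) -> str:
    """Send the accumulated history plus the new message; append the reply
    (which may contain no code at all) so the next request can continue."""
    state.messages.append({"role": "user", "content": user_message})
    reply = call_model(state.messages)
    state.messages.append({"role": "assistant", "content": reply})
    return reply
```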

Co-authored-by: George Gevoian <85144792+georgegevoian@users.noreply.github.com>
2023-05-08 14:15:22 -04:00
Jarosław Sadziński
d29770511c (core) Draft version of AI assistant
Summary:
The feature is behind a GRIST_FORMULA_ASSISTANT flag (which must be "true"). It can also be enabled in the
developer console by invoking GRIST_FORMULA_ASSISTANT.set(true).

Keys can be overridden in the document settings page.

Test Plan: For now just a stub test that checks if this feature is disabled by default.

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3815
2023-03-24 10:07:26 +01:00
Cyprien P
1ff93f89c2 (core) Porting the AI evaluation script
Summary:
Porting a script that runs an evaluation against our formula dataset.

To test, you need an OpenAI key (see here: https://platform.openai.com/)
or a Hugging Face key (it should work as well), then check out the branch and run

`OPENAI_API_KEY=<my_openai_api_key> node core/test/formula-dataset/runCompletion.js`

Test Plan:
Needs manual testing: so far there is no plan to make it part of CI.

The current score is somewhere around 34 successful prompts out of a total of 47.

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: jarek

Differential Revision: https://phab.getgrist.com/D3816
2023-03-15 14:54:28 +01:00
Jarosław Sadziński
6e3f0f2b35 (core) Porting back AI formula backend
Summary: This is the backend part of the formula AI.

Test Plan: New tests

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: cyprien

Differential Revision: https://phab.getgrist.com/D3786
2023-02-08 17:15:59 +01:00