Summary:
If the prompt exceeds the context length of even the longer OpenAI model, we now perform another retry with a
shorter variant of the formula prompt. The shorter prompt excludes non-referenced tables and lookup
method definitions, which should help reduce token usage in documents with larger schemas.
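Roughly, the fallback works like the sketch below (illustrative Python only, not the actual Grist code; `ContextLengthExceeded`, `build_formula_prompt`, and `call_model` are hypothetical names):

```python
class ContextLengthExceeded(Exception):
    """Raised when a request does not fit in the model's context window."""

def build_formula_prompt(schema, referenced_tables, include_all):
    # Render each table as a (fake) Python class; the short variant keeps only
    # the tables actually referenced by the column being completed.
    # (The real short variant also drops lookup method definitions.)
    tables = schema if include_all else {
        name: cols for name, cols in schema.items() if name in referenced_tables
    }
    return "\n\n".join(
        "class %s:\n%s" % (name, "".join("    %s: Any\n" % col for col in cols))
        for name, cols in tables.items()
    )

def complete_with_fallback(call_model, schema, referenced_tables):
    for include_all in (True, False):
        prompt = build_formula_prompt(schema, referenced_tables, include_all)
        try:
            return call_model(prompt)
        except ContextLengthExceeded:
            if not include_all:
                raise  # Even the shorter variant did not fit; give up.
```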
Test Plan: Server test.
Reviewers: JakubSerafin
Reviewed By: JakubSerafin
Subscribers: JakubSerafin
Differential Revision: https://phab.getgrist.com/D4184
Summary:
Replaced uses of asttokens.ASTTokens with asttokens.ASTText when working with plain `ast` trees, and switched to `atok.get_text_range` instead of `node.first_token`.
Upgraded asttokens in Python 2 (it was already upgraded in Python 3).
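For reference, a minimal usage sketch of the new API (assuming an asttokens version that provides `ASTText`, i.e. 2.1+):

```python
import ast
import asttokens

source = "x = f'{a + b}'"
tree = ast.parse(source)

# ASTText works directly with a plain `ast` tree and doesn't tokenize eagerly,
# so f-strings are handled; get_text_range replaces uses of node.first_token.
atok = asttokens.ASTText(source, tree=tree)
node = tree.body[0].value                 # the JoinedStr (f-string) node
start, end = atok.get_text_range(node)    # character offsets into `source`
print(source[start:end])                  # f'{a + b}'
```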
Test Plan: Added a test with f-strings.
Reviewers: dsagal
Reviewed By: dsagal
Differential Revision: https://phab.getgrist.com/D4001
Summary:
Sometimes the model repeats the classes given in the prompt which would mess up extracting the actual formula. This diff solves this by:
1. Changing the generated Python schema so that (a) the thing that needs completing is a plain top-level function instead of a property/method inside the class, and (b) the classes are fully valid syntax, which makes step 2 easier.
2. Removing classes from the parsed Python code when converting the completion to a formula.
3. Tweaking the prompt wording to discourage including classes in general, especially because the model sometimes tries to solve the problem by defining extra methods/attributes/classes.
While I was at it, I changed type hints to use builtins (e.g. `list` instead of `List`) to prevent `from typing import List` which was happening sometimes and would look weird in a formula. Similarly I removed `@dataclass` since that also implies an import, and this also fits with the tweaked wording that the classes are fake.
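A rough sketch of the class-stripping step (not the exact code in the repo; assumes Python 3.9+ for `ast.unparse`):

```python
import ast
import textwrap

completion = textwrap.dedent("""
    class Orders:
        Amount: float

    def total_with_tax(rec):
        return rec.Amount * 1.1
    """)

tree = ast.parse(completion)
# Drop any top-level class definitions that the model echoed back from the prompt.
tree.body = [stmt for stmt in tree.body if not isinstance(stmt, ast.ClassDef)]
print(ast.unparse(tree))   # only the function definition remains
```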
Test Plan:
Added a new test case to the formula dataset which triggers the unwanted behaviour. The factors that seem to trigger the problem are (1) a small schema so the classes are easier to repeat and (2) the need to import modules, which the model wants to place before all other code. The case failed before this diff and succeeded after. The tweaked wording reduces the chances of repeating the classes but didn't eliminate it, so forcibly removing the classes in Python was needed.
There were also a couple of other existing cases where repeating the classes was observed before but not after.
Overall the score increased from 49 to 51 out of 69 (including the new case). At one point the score was 53, but changes in whitespace were enough to make it drop again.
Reviewers: georgegevoian
Reviewed By: georgegevoian
Differential Revision: https://phab.getgrist.com/D4000
Summary:
Replaces https://phab.getgrist.com/D3940, particularly to avoid doing potentially unwanted things automatically.
Adds optional fields `evaluateCurrentFormula?: boolean; rowId?: number` to `FormulaAssistanceContext` (part of `AssistanceRequest`). When `evaluateCurrentFormula` is `true`, the assistant calls a new function `evaluate_formula` in the sandbox, which computes the existing formula in the column (regardless of anything the AI may have suggested) and uses the result to generate an additional system message added before the user's message. In theory this could be used in an interface where users ask why a formula doesn't work, including possibly a formula suggested by the AI. For now, it's only used in `runCompletion_impl.ts` for experimenting.
Also cleaned up a bit, removing `_chatMode` which is always `true` now, and uses of `regenerate` which is always `false`.
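Schematically, the idea is something like the following (illustrative Python only; not the actual `evaluate_formula` sandbox code, which goes through the data engine):

```python
def describe_current_formula(evaluate, rec):
    # `evaluate` is assumed to be a callable computing the column's current
    # formula for a record; the real code does this inside the sandbox via the
    # data engine. The returned text becomes an extra system message that is
    # added before the user's message.
    try:
        value = evaluate(rec)
        return "The current formula evaluates to %r for this row." % (value,)
    except Exception as exc:
        return "The current formula raises %s: %s for this row." % (
            type(exc).__name__, exc)
```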
Test Plan: Updated `runCompletion_impl` to optionally use the new feature, in which case it now scores 51/68 instead of 49/68.
Reviewers: paulfitz
Reviewed By: paulfitz
Differential Revision: https://phab.getgrist.com/D3970
Summary:
This tweaks the prompting so that the user's message is given on its own instead of as a docstring within Python. This is so that the prompt makes sense when:
- the user asks a question such as "Can you write me a formula which does ...?" rather than describing their formula as a docstring would, or
- the user sends a message that doesn't ask for a formula at all (https://grist.slack.com/archives/C0234CPPXPA/p1687699944315069?thread_ts=1687698078.832209&cid=C0234CPPXPA)
Also added wording for the model to refuse when the user asks for something that the model cannot do.
Because the code (and maybe in some cases the model) for non-ChatGPT models relies on the prompt consisting entirely of Python code produced by the data engine (which no longer contains the user's message), those code paths have been disabled for now. Updating them now seems like undesirable drag; I think it'd be better to revisit this when iteration/experimentation has slowed down and stabilised.
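In other words, the messages sent to the chat model now look roughly like this (the wording and schema here are made up for illustration):

```python
schema_code = (
    "class Orders:\n"
    "    Amount: float\n"
    "\n"
    "def Total_with_tax(rec) -> float:\n"
    "    ...\n"
)

messages = [
    # The generated Python schema and instructions stay in the system message.
    {"role": "system",
     "content": "You are helping write a Grist formula.\n\n" + schema_code},
    # The user's request is passed through as its own message rather than being
    # embedded as a docstring inside the Python code.
    {"role": "user",
     "content": "Can you write me a formula which adds 10% tax to Amount?"},
]
```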
Test Plan:
Added entries to the formula dataset where the response shouldn't contain a formula, indicated by the value `1` for the new column `no_formula`.
This is somewhat successful, as the model does refuse to help in some of the new test cases, but not all. Performance on existing entries also seems a bit worse, but it's hard to distinguish this from random noise. Hopefully this can be remedied in the future with more work, e.g. automatic followup messages containing example inputs and outputs.
Reviewers: paulfitz
Reviewed By: paulfitz
Subscribers: dsagal
Differential Revision: https://phab.getgrist.com/D3936
Summary: Converts `rec.` to `$` in AI generated formulas, and removes redundant `return` at the end.
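Roughly, the conversion does the following (a simplified sketch; the real code is more careful, e.g. about `rec.` appearing inside strings):

```python
import re

def convert_completion(body):
    # Grist formulas refer to the current record as `$col` rather than `rec.col`.
    body = re.sub(r"\brec\.", "$", body)
    # A trailing `return` is redundant: the formula's last expression is its value.
    lines = body.rstrip().splitlines()
    if lines:
        stripped = lines[-1].lstrip()
        if stripped.startswith("return "):
            indent = lines[-1][:len(lines[-1]) - len(stripped)]
            lines[-1] = indent + stripped[len("return "):]
    return "\n".join(lines)

print(convert_completion("total = rec.Amount * 1.1\nreturn total"))
# total = $Amount * 1.1
# total
```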
Test Plan: Expanded unit test.
Reviewers: dsagal
Reviewed By: dsagal
Differential Revision: https://phab.getgrist.com/D3933
Summary:
The previous code for extracting a Python formula from the LLM completion involved some shaky string manipulation which this improves on.
Overall the 'test results' from `runCompletion` went from 37/47 to 45/47 for `gpt-3.5-turbo-0613`.
The biggest problem that motivated these changes was that the old code assumed that code was always inside a markdown code block
(i.e. triple backticks) and so if there was no block there was no code. But the completion often consists of *only* code
with no accompanying explanation or markdown. By parsing the completion in Python instead of JS,
we can easily check if the entire completion is valid Python syntax and accept it if it is.
I also noticed one failure where the completion contained the full function (instead of just the body)
with the necessary imports placed before that function instead of inside it. The new parsing moves such imports inside.
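The core check, sketched as a hypothetical helper (the real code additionally moves top-level imports into the function body):

```python
import ast
import re

def extract_code(completion):
    # If the whole completion is valid Python, accept it directly: completions
    # often consist only of code, with no markdown fences or explanation.
    try:
        ast.parse(completion)
        return completion
    except SyntaxError:
        pass
    # Otherwise fall back to the first fenced code block, if any.
    match = re.search(r"`{3}(?:python)?\n(.*?)`{3}", completion, re.DOTALL)
    return match.group(1) if match else None
```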
Test Plan: Added a Python unit test
Reviewers: paulfitz
Reviewed By: paulfitz
Subscribers: paulfitz
Differential Revision: https://phab.getgrist.com/D3922
* add support for conversational state to assistance endpoint
This refactors the assistance code somewhat, to allow carrying
along some conversational state. It extends the OpenAI-flavored
assistant to make use of that state to have a conversation.
The front-end is tweaked a little bit to allow for replies that
don't have any code in them (though I didn't get into formatting
such replies nicely).
Currently tested primarily through the runCompletion script,
which has been extended a bit to allow testing simulated
conversations (where an error is pasted in follow-up, or
an expected-vs-actual comparison).
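The state is essentially a message history passed back and forth, roughly as
below (sketched in Python for brevity; the names are illustrative, not the
actual endpoint types):

```python
def assist(call_model, system_prompt, state, user_message):
    # `state` is the message history from the previous call, if any; the client
    # sends it back so the assistant can continue the same conversation.
    messages = list(state) if state else [{"role": "system", "content": system_prompt}]
    messages.append({"role": "user", "content": user_message})
    reply = call_model(messages)
    messages.append({"role": "assistant", "content": reply})
    # The caller stores the updated history and sends it with the next request,
    # e.g. a pasted error message or an expected-vs-actual comparison.
    return reply, messages
```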
Co-authored-by: George Gevoian <85144792+georgegevoian@users.noreply.github.com>
Summary: This is a backend part for the formula AI.
Test Plan: New tests
Reviewers: paulfitz
Reviewed By: paulfitz
Subscribers: cyprien
Differential Revision: https://phab.getgrist.com/D3786