Commit Graph

4 Commits

Author SHA1 Message Date
Alex Hall
a1d31e41ad (core) Prevent the AI assistant response from including class definitions
Summary:
Sometimes the model repeats the classes given in the prompt, which would mess up extracting the actual formula. This diff solves this by:

1. Changing the generated Python schema so that (a) the thing that needs completing is a plain top-level function instead of a property/method inside the class, and (b) the classes are fully valid syntax, which makes them easier to remove.
2. Removing classes from the parsed Python code when converting the completion to a formula (see the sketch below).
3. Tweaking the prompt wording to discourage including classes in general, especially because the model sometimes tries to solve the problem by defining extra methods/attributes/classes.
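
The class removal in step 2 could be done with Python's `ast` module; this is a minimal sketch for illustration, not necessarily the actual implementation:

```python
import ast

def strip_classes(completion: str) -> str:
    """Drop top-level class definitions the model repeated from the
    prompt, keeping imports, functions, and everything else."""
    tree = ast.parse(completion)
    tree.body = [node for node in tree.body
                 if not isinstance(node, ast.ClassDef)]
    return ast.unparse(tree)  # ast.unparse requires Python 3.9+
```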

While I was at it, I changed the type hints to use builtins (e.g. `list` instead of `List`) to prevent `from typing import List`, which sometimes appeared and would look weird in a formula. Similarly, I removed `@dataclass`, since that also implies an import; this also fits with the tweaked wording that the classes are fake.
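
For concreteness, the generated schema after these changes might look roughly like this (table and column names invented for illustration):

```python
# Fake but syntactically valid classes describing the document schema,
# with builtin type hints so no `typing` import is needed.
class Orders:
    total: float

class Customers:
    name: str
    orders: list[Orders]

# The completion target: a plain top-level function rather than a
# property/method inside one of the classes.
def lifetime_value(rec: Customers):
    ...
```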

Test Plan:
Added a new test case to the formula dataset which triggers the unwanted behaviour. The factors that seem to trigger the problem are (1) a small schema so the classes are easier to repeat and (2) the need to import modules, which the model wants to place before all other code. The case failed before this diff and succeeded after. The tweaked wording reduces the chances of repeating the classes but didn't eliminate it, so forcibly removing the classes in Python was needed.

There were also a couple of other existing cases where repeating the classes was observed before but not after.

Overall the score increased from 49 to 51 out of 69 (including the new case). At one point the score was 53, but changes in whitespace were enough to make it drop again.

Reviewers: georgegevoian

Reviewed By: georgegevoian

Differential Revision: https://phab.getgrist.com/D4000
2023-08-18 12:50:09 +02:00
Alex Hall
bb7cf6ba20 (core) Modify prompt so that model may say it cannot help with certain requests.
Summary:
This tweaks the prompting so that the user's message is given on its own instead of as a docstring within Python. This is so that the prompt makes sense when:

- the user asks a question such as "Can you write me a formula which does ...?" rather than describing their formula as a docstring would, or
- the user sends a message that doesn't ask for a formula at all (https://grist.slack.com/archives/C0234CPPXPA/p1687699944315069?thread_ts=1687698078.832209&cid=C0234CPPXPA)

Also added wording for the model to refuse when the user asks for something that the model cannot do.
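
Assuming a ChatGPT-style chat API, the restructuring is roughly from the first shape below to the second (the wording and schema here are invented for illustration):

```python
# Before: the user's request was embedded in the Python prompt as a docstring.
old_prompt = '''
class Orders:
    total: float

def discount(rec: Orders):
    """Give a 10% discount on orders over $100."""
'''

# After: the schema and the user's message are separate chat messages, so
# questions ("Can you write me a formula which...?") and non-formula
# requests still read naturally.
messages = [
    {"role": "system", "content": "... Python schema from the data engine ..."},
    {"role": "user", "content": "Can you write me a formula which gives a "
                                "10% discount on orders over $100?"},
]
```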

Because the code (and maybe, in some cases, the model) for non-ChatGPT models relies on the prompt consisting entirely of Python code produced by the data engine (which no longer contains the user's message), those code paths have been disabled for now. Updating them now seems like undesirable drag; I think it'd be better to revisit this when iteration/experimentation has slowed down and stabilised.

Test Plan:
Added entries to the formula dataset where the response shouldn't contain a formula, indicated by the value `1` for the new column `no_formula`.
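
For illustration, the scoring might treat these rows roughly as follows (a sketch: `no_formula` is the real column, but the helper and the other column name are assumptions):

```python
def is_refusal(completion: str) -> bool:
    """Hypothetical heuristic: a completion containing no function
    definition is treated as the model declining to write a formula."""
    return "def " not in completion

def score_row(row: dict, completion: str) -> bool:
    if row.get("no_formula") == "1":
        # For no-formula rows, success means the model refused to help.
        return is_refusal(completion)
    # Otherwise compare against the expected formula (column name assumed).
    return completion.strip() == row["formula"].strip()
```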

This is somewhat successful: the model does refuse to help in some of the new test cases, but not all. Performance on existing entries also seems a bit worse, but it's hard to distinguish this from random noise. Hopefully this can be remedied in the future with more work, e.g. automatic followup messages containing example inputs and outputs.

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: dsagal

Differential Revision: https://phab.getgrist.com/D3936
2023-06-27 15:57:56 +02:00
Alex Hall
d88f79bc5e (core) Add cases involving lookups to formula dataset
Summary: I looked through the template documents mentioned in `formula-dataset-index.csv` and selected formulas involving lookups to add to the CSV, particularly nontrivial formulas.
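
As an example of the kind of formula involved (invented here with made-up table/column names, not an actual dataset entry), a nontrivial lookup in a Grist formula might look like:

```python
# Total payment amount for the current person; $id is Grist shorthand
# for rec.id, the id of the current row.
sum(p.Amount for p in Payments.lookupRecords(Person=$id))
```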

Test Plan: Running the test script on the new dataset gives a score of 47/61 compared to the previous 45/47, i.e. it scores 2/14 on the new entries. Lookups are clearly challenging and we'll need to add more information to the prompt, maybe even consider a more complicated strategy than a single prompt. This diff is purely for expanding the dataset, improving performance will come later.

Reviewers: paulfitz

Reviewed By: paulfitz

Differential Revision: https://phab.getgrist.com/D3931
2023-06-26 13:18:52 +02:00
Cyprien P
1ff93f89c2 (core) Porting the AI evaluation script
Summary:
Porting the script that runs an evaluation against our formula dataset.

To test, you need an OpenAI key (see here: https://platform.openai.com/)
or a Hugging Face key (it should work as well); then check out the branch and run
`OPENAI_API_KEY=<my_openai_api_key> node core/test/formula-dataset/runCompletion.js`

Test Plan:
Needs manual testing; so far there is no plan to make it part of CI.

The current score is somewhere around 34 successful prompts out of a total of 47.

Reviewers: paulfitz

Reviewed By: paulfitz

Subscribers: jarek

Differential Revision: https://phab.getgrist.com/D3816
2023-03-15 14:54:28 +01:00