# Using Large Language Models with Grist

With this experimental Grist feature, originally developed by Alex Hall,
you can hook up OpenAI's ChatGPT to write formulas for
you. Here's how.

First, you need an API key. Visit https://openai.com/api/, create a key, and
store it in the environment variable `OPENAI_API_KEY`.
That's all the configuration needed!
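
A minimal sketch of this setup on a self-hosted instance (assuming you run
Grist from a checkout with `yarn start`; if you use the Docker image instead,
pass the variable with `docker run -e OPENAI_API_KEY=...`):

```sh
# Make the key available to the Grist server process.
export OPENAI_API_KEY=sk-...your-key-here...

# Start Grist as usual; the key is picked up from the environment.
yarn start
```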

Currently this is only a backend feature; we are still working on the UI for it.

## Hugging Face and other OpenAI models (deactivated)
_Not currently available; reviving it will take some work. These notes are preserved only as a reminder to ourselves of how this worked._

~~To use a different OpenAI model such as `code-davinci-002` or `text-davinci-003`,
set the environment variable `COMPLETION_MODEL` to the name of the model.~~

~~Alternatively, there are many non-proprietary models hosted on Hugging Face.
At the time of writing, none can compare with OpenAI for use with Grist.
Things can change quickly in the world of AI though. So instead of OpenAI,
you can visit https://huggingface.co/ and prepare a key, then
store it in an environment variable `HUGGINGFACE_API_KEY`.~~

~~The model used will default to `NovelAI/genji-python-6B` for
Hugging Face. There's no particularly great model for this application,
but you can try other models by setting an environment variable
`COMPLETION_MODEL` to `codeparrot/codeparrot` or
`NinedayWang/PolyCoder-2.7B` or similar.~~

~~If you are hosting a model yourself, host it as Hugging Face does,
and use `COMPLETION_URL` rather than `COMPLETION_MODEL` to
point to the model on your own server rather than Hugging Face.~~
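
For reference, a minimal sketch of how the deactivated configuration above fit
together. It uses only the environment variables and model names quoted in
these notes; the key and URL values are placeholders, and the code paths that
read these variables are currently disabled:

```sh
# Alternative OpenAI completion model (historical):
export COMPLETION_MODEL=text-davinci-003

# Or the Hugging Face path (historical):
export HUGGINGFACE_API_KEY=hf_...your-key-here...
export COMPLETION_MODEL=codeparrot/codeparrot  # instead of the default NovelAI/genji-python-6B

# Or a model hosted on your own server, served the way Hugging Face
# serves models (placeholder URL):
export COMPLETION_URL=https://your-server.example.com/your-model
```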