Summary:
For users who don't automatically have full rights
to the document, provide them with attachment metadata only
for rows they have access to. This is a little tricky to
do efficiently. We provide attachment metadata when an
individual table is fetched, rather than on initial document
load, so we don't block that load on a full document scan.
We send attachment metadata to a client when we see that
we are shipping rows mentioning particular attachments,
without making any effort to track the metadata the client
already has.
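A minimal sketch of that filtering step, with hypothetical names
(in reality this happens in the server's access-control layer as
rows are prepared for a client):
```python
def attachment_metadata_for_rows(rows, attachment_columns, attachments):
    """Collect metadata only for attachments referenced by shipped rows.

    `rows` are the records the client may see, `attachment_columns`
    names the columns holding attachment references (lists of row ids
    into _grist_Attachments), and `attachments` maps ids to metadata.
    """
    referenced = set()
    for row in rows:
        for col in attachment_columns:
            referenced.update(row.get(col) or [])
    # No bookkeeping of what the client already has: metadata is simply
    # resent for whatever attachments the shipped rows mention.
    return {att_id: attachments[att_id] for att_id in referenced}
```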
Test Plan: updated tests
Reviewers: dsagal, jarek
Reviewed By: dsagal, jarek
Differential Revision: https://phab.getgrist.com/D3722
Summary:
This is just a convenience for myself. I happen to have a version of
gvisor on my Linux dev machine that differs from what we use in our
containers. There's a small difference in user setup that only manifests
itself when importing files. Grist uses a temporary directory that is
readable only by the creating user, created outside the container, and
then accessed within the container. For that to work, the user
identities have to
line up exactly. This adds a flag I can set in my environment to make
things work. An alternative solution that doesn't require a flag
would be to make the temporary directories readable by other users,
but that seemed a bigger change than justified.
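A minimal sketch of the kind of switch involved, with a hypothetical
flag name and uid:
```python
import os

SANDBOX_UID = 1000  # illustrative fixed in-container user

def sandbox_uid():
    # The import temp directory is created outside the container with
    # mode 0o700, so the identity accessing it inside the sandbox must
    # match the creating user exactly.
    if os.environ.get('GRIST_HOST_GVISOR'):  # hypothetical flag name
        return os.getuid()  # line up with the host user
    return SANDBOX_UID
```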
Ideally we'd make a very robust and easy-to-run sandbox for Linux
users, and I have ideas there for the future.
Test Plan: manual
Reviewers: dsagal
Reviewed By: dsagal
Differential Revision: https://phab.getgrist.com/D3742
Summary:
Rows in the _grist_Attachments table have a special lifecycle:
they are created by a dedicated method and deleted via a dedicated
process. All other modifications are now rejected, for simplicity.
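A minimal sketch of the guard, with hypothetical names; doc actions
are assumed to be tuples like `('UpdateRecord', table_id, ...)`:
```python
def check_attachments_change(doc_action, from_lifecycle_code=False):
    """Reject direct changes to _grist_Attachments.

    Rows there should only be added by the dedicated attachment method
    and removed by the dedicated deletion process; anything else is
    refused.
    """
    action_name, table_id = doc_action[0], doc_action[1]
    if table_id == '_grist_Attachments' and not from_lifecycle_code:
        raise ValueError(
            'Direct modification of _grist_Attachments is not allowed '
            '(%s)' % action_name)
```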
Test Plan: added test
Reviewers: dsagal, jarek
Reviewed By: dsagal, jarek
Differential Revision: https://phab.getgrist.com/D3712
Summary:
This allows limiting the memory available to documents in the sandbox when gvisor is used. If the memory limit is exceeded, we offer to open the document in recovery mode. Recovery mode is tweaked to open docs with tables in "ondemand" mode, which will generally take less memory and allow for deleting rows.
The limit is on the size of the virtual address space available to the sandbox (`RLIMIT_AS`), which in practice appears to function as one would want, and is the only practical option. There is a documented `RLIMIT_RSS` limit that "specifies the limit (in bytes) of the process's resident set (the number of virtual pages resident in RAM)", but this is no longer enforced by the kernel (neither the host kernel nor gvisor's).
When the sandbox runs out of memory, there are many ways it can fail. This diff catches all the ones I saw, but there could be more.
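A minimal sketch, using Python's standard `resource` module, of how such a cap can be applied from inside the sandboxed interpreter and how one failure mode surfaces (sizes and messages are illustrative):
```python
import resource

def limit_address_space(max_bytes):
    # RLIMIT_AS caps the virtual address space of the process; the
    # documented RLIMIT_RSS is ignored by modern kernels (host or gvisor).
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

limit_address_space(2 * 1024 ** 3)  # e.g. a 2 GiB cap
try:
    too_big = bytearray(4 * 1024 ** 3)  # exceeds the cap
except MemoryError:
    # Just one of several possible failure modes: allocations can also
    # fail deep inside a library, or take the whole process down.
    print('out of memory; offer to open the doc in recovery mode')
```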
Test Plan: added tests
Reviewers: alexmojaki
Reviewed By: alexmojaki
Subscribers: alexmojaki
Differential Revision: https://phab.getgrist.com/D3398
Summary:
Currently, we have two ways of delivering Grist. One is grist-core,
which has simple defaults and is relatively easy for third parties to
deploy. The second is our internal build for our SaaS, which is the
opposite. For self-managed Grist, a planned paid on-premise version
of Grist, I adopt the following approach:
* Use the `grist-core` build mechanism, extending it to accept an
overlay of extra code if present.
* Extra code is supplied in a self-contained `ext` directory, with
an `ext/app` directory that has the same structure as core `app`
and `stubs/app`.
* The `ext` directory also contains information about extra
node dependencies needed beyond that of `grist-core`.
* The `ext` directory is contained within our monorepo rather than
`grist-core` since it may contain material not under the Apache
license.
Docker builds are achieved in our monorepo by using the `--build-context`
functionality to add in `ext` during the regular `grist-core` build:
```
docker buildx build --load -t gristlabs/grist-ee --build-context=ext=../ext .
```
Incremental builds in our monorepo are achieved with the `build_core.sh` helper,
like:
```
buildtools/build_core.sh /tmp/self-managed
cd /tmp/self-managed
yarn start
```
The initial `ext` directory contains material for snapshotting to S3.
If you build the docker image as above, and have S3 access, you can
do something like:
```
docker run -p 8484:8484 --env GRIST_SESSION_SECRET=a-secret \
--env GRIST_DOCS_S3_BUCKET=grist-docs-test \
--env GRIST_DOCS_S3_PREFIX=self-managed \
-v $HOME/.aws:/root/.aws -it gristlabs/grist-ee
```
This will start a version of Grist that is like `grist-core` but with
S3 snapshots enabled. To release this code to `grist-core`, it would
just need to move from `ext/app` to `app` within core.
I tried a lot of ways of organizing self-managed Grist, and this was
what made me happiest. There are a lot of trade-offs, but here is what
I was looking for:
* Only OSS code in grist-core. Adding mixed-license material there
feels unfair to people already working with the repo. That said,
a possible future is to move away from our private monorepo to
a public mixed-license repo, which could have the same relationship
with grist-core as the monorepo has.
* Minimal differences between self-managed builds and one of our
existing builds, ideally hewing as close to grist-core as possible
for ease of documentation, debugging, and maintenance.
* Ideally, docker builds without copying files around (the new
`--build-context` functionality made that possible).
* Compatibility with monorepo build.
Expressing dependencies of the extra code in `ext` proved tricky to
do in a clean way. Yarn/npm fought me every step of the way - everything
related to optional dependencies was unsatisfactory in some respect.
Yarn2 is flexible but smells like it might be overreach. In the end,
organizing to install non-core dependencies one directory up from the
main build was a good simple trick that saved my bacon.
This diff gets us to the point of building `grist-ee` images conveniently,
but there isn't a public repo people can go look at to see its source. This
could be generated by taking `grist-core`, adding the `ext` directory
to it, and pushing to a distinct repository. I'm not in a hurry to do that,
since a PR to that repo would be hard to sync with our monorepo and
`grist-core`. Also, we don't have any licensing text ready for the `ext`
directory. So leaving that for future work.
Test Plan: manual
Reviewers: georgegevoian, alexmojaki
Reviewed By: georgegevoian, alexmojaki
Differential Revision: https://phab.getgrist.com/D3415
Summary:
This adds support for gvisor sandboxing in core. When Grist is run outside of a container, regular gvisor can be used (if on Linux), and will run in rootless mode. When Grist is run inside a container, Docker's default policy is insufficient for running gvisor, so a fork of gvisor is used that has less defense-in-depth but can run without privileges.
Sandboxing is automatically turned on in the Grist core container. It is not turned on automatically when built from source, since it is operating-system dependent.
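A minimal sketch of the selection involved, with `GRIST_SANDBOX_FLAVOR` as the configuration point; the detection heuristic and flavor names here are hypothetical:
```python
import os

def pick_sandbox_flavor():
    # Explicit configuration wins; the core container image sets this.
    flavor = os.environ.get('GRIST_SANDBOX_FLAVOR')
    if flavor:
        return flavor
    if os.path.exists('/.dockerenv'):
        # Docker's default seccomp/capability policy blocks regular
        # gvisor, so inside a container use the unprivileged fork.
        return 'gvisor-unprivileged'
    # Outside a container on Linux, regular gvisor runs rootless.
    return 'gvisor'
```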
This diff may break a complex method of testing Grist with gvisor on Macs that I may have been the only person using. If anyone complains, I'll find time on a Mac to fix it :)
This diff includes a small "easter egg" to force document loads, primarily intended for developer use.
Test Plan: existing tests pass; checked that core and SaaS Docker builds function
Reviewers: alexmojaki
Reviewed By: alexmojaki
Subscribers: alexmojaki
Differential Revision: https://phab.getgrist.com/D3333