gristlabs_grist-core/Dockerfile

(core) add machinery for self-managed flavor of Grist

Summary: Currently, we have two ways that we deliver Grist. One is grist-core, which has simple defaults and is relatively easy for third parties to deploy. The second is our internal build for our SaaS, which is the opposite. For self-managed Grist, a planned paid on-premise version of Grist, I adopt the following approach:

* Use the `grist-core` build mechanism, extending it to accept an overlay of extra code if present.
* Extra code is supplied in a self-contained `ext` directory, with an `ext/app` directory that has the same structure as core `app` and `stubs/app`.
* The `ext` directory also contains information about extra node dependencies needed beyond those of `grist-core`.
* The `ext` directory is contained within our monorepo rather than `grist-core`, since it may contain material not under the Apache license.

Docker builds are achieved in our monorepo by using the `--build-context` functionality to add in `ext` during the regular `grist-core` build:

```
docker buildx build --load -t gristlabs/grist-ee --build-context=ext=../ext .
```

Incremental builds in our monorepo are achieved with the `build_core.sh` helper, like:

```
buildtools/build_core.sh /tmp/self-managed
cd /tmp/self-managed
yarn start
```

The initial `ext` directory contains material for snapshotting to S3. If you build the docker image as above, and have S3 access, you can do something like:

```
docker run -p 8484:8484 --env GRIST_SESSION_SECRET=a-secret \
  --env GRIST_DOCS_S3_BUCKET=grist-docs-test \
  --env GRIST_DOCS_S3_PREFIX=self-managed \
  -v $HOME/.aws:/root/.aws -it gristlabs/grist-ee
```

This will start a version of Grist that is like `grist-core` but with S3 snapshots enabled. To release this code to `grist-core`, it would just need to move from `ext/app` to `app` within core.

I tried a lot of ways of organizing self-managed Grist, and this was what made me happiest. There are a lot of trade-offs, but here is what I was looking for:

* Only OSS code in grist-core. Adding mixed-license material there feels unfair to people already working with the repo. That said, a possible future is to move away from our private monorepo to a public mixed-license repo, which could have the same relationship with grist-core as the monorepo has.
* Minimal differences between self-managed builds and one of our existing builds, ideally hewing as close to grist-core as possible for ease of documentation, debugging, and maintenance.
* Ideally, docker builds without copying files around (the new `--build-context` functionality made that possible).
* Compatibility with the monorepo build.

Expressing dependencies of the extra code in `ext` proved tricky to do in a clean way. Yarn/npm fought me every step of the way: everything related to optional dependencies was unsatisfactory in some respect. Yarn2 is flexible but smells like it might be overreach. In the end, installing non-core dependencies one directory up from the main build was a good, simple trick that saved my bacon.

This diff gets us to the point of building `grist-ee` images conveniently, but there isn't a public repo people can go look at to see its source. This could be generated by taking `grist-core`, adding the `ext` directory to it, and pushing it to a distinct repository. I'm not in a hurry to do that, since a PR to that repo would be hard to sync with our monorepo and `grist-core`. Also, we don't have any licensing text ready for the `ext` directory. So leaving that for future work.

Test Plan: manual

Reviewers: georgegevoian, alexmojaki

Reviewed By: georgegevoian, alexmojaki

Differential Revision: https://phab.getgrist.com/D3415
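For orientation, here is a sketch of what such an `ext` overlay might contain. Only `app/` and `package.json` are named in the commit message and Dockerfile; the `yarn.lock` entry is an assumption based on the `--frozen-lockfile` install step below.

```
ext/
  app/            # overlay code, same layout as core `app` and `stubs/app`
  package.json    # extra node dependencies beyond grist-core (only if needed)
  yarn.lock       # assumed, since the Dockerfile installs with --frozen-lockfile
```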
################################################################################
## The Grist source can be extended. This is a stub that can be overridden
## from the command line, as:
## docker buildx build -t ... --build-context=ext=<path> .
## The code in <path> will then be built along with the rest of Grist.
################################################################################
FROM scratch as ext
################################################################################
## Javascript build stage
################################################################################
FROM node:18-buster as builder
# Install all node dependencies.
WORKDIR /grist
COPY package.json yarn.lock /grist/
RUN yarn install --frozen-lockfile --verbose --network-timeout 600000
# Install any extra node dependencies (at root level, to avoid having to wrestle
# with merging them).
COPY --from=ext / /grist/ext
RUN \
  mkdir /node_modules && \
  cd /grist/ext && \
  { if [ -e package.json ] ; then yarn install --frozen-lockfile --modules-folder=/node_modules --verbose --network-timeout 600000 ; fi }
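# Note: Node resolves packages by walking up parent directories looking for a
# node_modules folder, so dependencies installed into /node_modules (one level
# above /grist) are visible to the build without merging them into
# /grist/node_modules.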
# Build node code.
COPY tsconfig.json /grist
COPY tsconfig-ext.json /grist
COPY test/tsconfig.json /grist/test/tsconfig.json
COPY test/chai-as-promised.js /grist/test/chai-as-promised.js
COPY app /grist/app
COPY stubs /grist/stubs
COPY buildtools /grist/buildtools
RUN yarn run build:prod
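# The production build compiles core app code together with any overlay code
# present under /grist/ext (see the ext stub at the top of this file).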
################################################################################
## Python collection stage
################################################################################
# Fetch python3.11 and python2.7
FROM python:3.11-slim-buster as collector
# Install all python dependencies.
ADD sandbox/requirements.txt requirements.txt
ADD sandbox/requirements3.txt requirements3.txt
RUN \
  apt update && \
  apt install -y --no-install-recommends python2 python-pip python-setuptools \
    build-essential libxml2-dev libxslt-dev python-dev zlib1g-dev && \
  pip2 install wheel && \
  pip2 install -r requirements.txt && \
  pip3 install -r requirements3.txt
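# Both interpreters and their installed packages are copied from this stage
# into the final run-time image below.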
################################################################################
## Sandbox collection stage
################################################################################
# Fetch gvisor-based sandbox. Note, to enable it to run within default
# unprivileged docker, layers of protection that require privilege have
# been stripped away, see https://github.com/google/gvisor/issues/4371
FROM docker.io/gristlabs/gvisor-unprivileged:buster as sandbox
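# The runsc binary from this stage is copied into the final image; it is used
# when the container is started with GRIST_SANDBOX_FLAVOR=gvisor (the default
# is unsandboxed, see the ENV block below).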
################################################################################
## Run-time stage
################################################################################
# Now, start preparing final image.
FROM node:18-buster-slim
# Install libexpat1, libsqlite3-0 for python3 library binary dependencies.
# Install pgrep for managing gvisor processes.
RUN \
  apt-get update && \
  apt-get install -y --no-install-recommends libexpat1 libsqlite3-0 procps && \
  rm -rf /var/lib/apt/lists/*
# Keep all storage the user may want to persist in a distinct directory
RUN mkdir -p /persist/docs
# Copy node files.
COPY --from=builder /node_modules /node_modules
COPY --from=builder /grist/node_modules /grist/node_modules
COPY --from=builder /grist/_build /grist/_build
COPY --from=builder /grist/static /grist/static-built
# Copy python files.
COPY --from=collector /usr/bin/python2.7 /usr/bin/python2.7
COPY --from=collector /usr/lib/python2.7 /usr/lib/python2.7
COPY --from=collector /usr/local/lib/python2.7 /usr/local/lib/python2.7
COPY --from=collector /usr/local/bin/python3.11 /usr/bin/python3.11
COPY --from=collector /usr/local/lib/python3.11 /usr/local/lib/python3.11
COPY --from=collector /usr/local/lib/libpython3.11.* /usr/local/lib/
# Set default to python3
RUN \
  ln -s /usr/bin/python3.11 /usr/bin/python && \
  ln -s /usr/bin/python3.11 /usr/bin/python3 && \
  ldconfig
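# (ldconfig refreshes the shared-library cache so the copied libpython3.11 can be found.)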
# Copy runsc.
COPY --from=sandbox /runsc /usr/bin/runsc
# Add files needed for running server.
ADD package.json /grist/package.json
ADD bower_components /grist/bower_components
ADD sandbox /grist/sandbox
ADD plugins /grist/plugins
ADD static /grist/static
# Finalize static directory
RUN \
  mv /grist/static-built/* /grist/static && \
  rmdir /grist/static-built
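# /grist/static now contains both the checked-in assets and the assets built
# in the builder stage.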
WORKDIR /grist
# Set some default environment variables to give a setup that works out of the box when
# started as:
# docker run -p 8484:8484 -it <image>
# Variables will need to be overridden for other setups.
#
# GRIST_SANDBOX_FLAVOR is set to unsandboxed by default, because it
# appears that the services people use to run docker containers have
# a wide variety of security settings and the functionality needed for
# sandboxing may not be possible in every case. For default docker
# settings, you can get sandboxing as follows:
# docker run --env GRIST_SANDBOX_FLAVOR=gvisor -p 8484:8484 -it <image>
#
ENV \
  PYTHON_VERSION_ON_CREATION=3 \
  GRIST_ORG_IN_PATH=true \
  GRIST_HOST=0.0.0.0 \
  GRIST_SINGLE_PORT=true \
  GRIST_SERVE_SAME_ORIGIN=true \
  GRIST_DATA_DIR=/persist/docs \
  GRIST_INST_DIR=/persist \
  GRIST_SESSION_COOKIE=grist_core \
  GVISOR_FLAGS="-unprivileged -ignore-cgroups" \
  GRIST_SANDBOX_FLAVOR=unsandboxed \
  TYPEORM_DATABASE=/persist/home.sqlite3
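# When built with an ext overlay (e.g. the gristlabs/grist-ee image), S3
# snapshots can additionally be enabled by setting GRIST_DOCS_S3_BUCKET and
# GRIST_DOCS_S3_PREFIX, as shown in the docker run example in the commit
# message above.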
EXPOSE 8484
CMD ["./sandbox/run.sh"]