(core) API reworked to use POST to create a webhook and DELETE to remove it
Summary:
Introduces POST /api/docs/{docId}/webhooks and DELETE /api/docs/{docId}/webhooks/{webhookId} in place of the old _subscribe and _unsubscribe endpoints.
Removes the unsubscribeKey check when deleting a webhook - only the document owner can delete a webhook through the DELETE endpoint. The subscription key is still required by the _unsubscribe endpoint.
The old _unsubscribe and _subscribe endpoints remain active and work as before - no changes there.
Posting schema:
```
POST /api/docs/[docId]/webhooks
```
Request body:
```
{
  "webhooks": [
    {
      "fields": {
        "url": "https://webhook.site/3bd02246-f122-445e-ba7f-bf5ea5bb6eb1",
        "eventTypes": [
          "add",
          "update"
        ],
        "enabled": true,
        "name": "WebhookName",
        "memo": "just a text",
        "tableId": "Table1"
      }
    },
    {
      "fields": {
        "url": "https://webhook.site/3bd02246-f122-445e-ba7f-bf5ea5bb6eb2",
        "eventTypes": [
          "add"
        ],
        "enabled": true,
        "name": "OtherWebhookName",
        "memo": "just a text",
        "tableId": "Table1"
      }
    }
  ]
}
```
Expected response: a webhookId for each webhook posted:
```
{
  "webhooks": [
    {
      "id": "85c77108-f1e1-4217-a50d-acd1c5996da2"
    },
    {
      "id": "d87a6402-cfd7-4822-878c-657308fcc8c3"
    }
  ]
}
```
Deleting webhooks:
```
DELETE /api/docs/[docId]/webhooks/[webhookId]
```
There is no payload in the DELETE request, so only one webhook can be deleted at a time.
Response:
```
{
  "success": true
}
```
Test Plan: Old unit tests improved to handle the new endpoints, and one more added to check that webhooks are in fact created/removed.
Reviewers: alexmojaki
Reviewed By: alexmojaki
Subscribers: paulfitz, alexmojaki
Differential Revision: https://phab.getgrist.com/D3916
2023-07-14 10:05:22 +00:00
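For illustration, a minimal curl sketch of the new flow ($DOC_ID and $API_KEY are placeholders for your own document ID and API key; responses shown match the documented shapes above):
```
$ curl -X POST -H "Authorization: Bearer $API_KEY" -H "Content-Type: application/json" \
    "http://localhost:8080/api/docs/$DOC_ID/webhooks" \
    -d '{"webhooks": [{"fields": {"url": "https://webhook.site/3bd02246-f122-445e-ba7f-bf5ea5bb6eb1", "eventTypes": ["add"], "enabled": true, "name": "WebhookName", "tableId": "Table1"}}]}'
{"webhooks":[{"id":"85c77108-f1e1-4217-a50d-acd1c5996da2"}]}
$ curl -X DELETE -H "Authorization: Bearer $API_KEY" \
    "http://localhost:8080/api/docs/$DOC_ID/webhooks/85c77108-f1e1-4217-a50d-acd1c5996da2"
{"success":true}
```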
import {concatenateSummaries, summarizeAction} from "app/common/ActionSummarizer";
import {createEmptyActionSummary} from "app/common/ActionSummary";
import {QueryFilters} from 'app/common/ActiveDocAPI';
import {ApiError, LimitType} from 'app/common/ApiError';
import {BrowserSettings} from "app/common/BrowserSettings";
import {
  BulkColValues,
  ColValues,
  fromTableDataAction,
  TableColValues,
  TableRecordValue,
  UserAction
} from 'app/common/DocActions';
import {DocData} from 'app/common/DocData';
import {
  extractTypeFromColType,
  getReferencedTableId,
  isBlankValue,
  isFullReferencingType,
  isRaisedException,
} from "app/common/gristTypes";
import {INITIAL_FIELDS_COUNT} from "app/common/Forms";
import {buildUrlId, parseUrlId, SHARE_KEY_PREFIX} from "app/common/gristUrls";
import {isAffirmative, safeJsonParse, timeoutReached} from "app/common/gutil";
import {SchemaTypes} from "app/common/schema";
import {SortFunc} from 'app/common/SortFunc';
import {Sort} from 'app/common/SortSpec';
import {MetaRowRecord} from 'app/common/TableData';
import {TelemetryMetadataByLevel} from "app/common/Telemetry";
import {WebhookFields} from "app/common/Triggers";
import TriggersTI from 'app/common/Triggers-ti';
import {DocReplacementOptions, DocState, DocStateComparison, DocStates, NEW_DOCUMENT_CODE} from 'app/common/UserAPI';
import {HomeDBManager, makeDocAuthResult} from 'app/gen-server/lib/HomeDBManager';
import * as Types from "app/plugin/DocApiTypes";
import DocApiTypesTI from "app/plugin/DocApiTypes-ti";
import {GristObjCode} from "app/plugin/GristData";
import GristDataTI from 'app/plugin/GristData-ti';
import {OpOptions} from "app/plugin/TableOperations";
import {
  handleSandboxErrorOnPlatform,
  TableOperationsImpl,
  TableOperationsPlatform
} from 'app/plugin/TableOperationsImpl';
import {ActiveDoc, colIdToRef as colIdToReference, getRealTableId, tableIdToRef} from "app/server/lib/ActiveDoc";
import {appSettings} from "app/server/lib/AppSettings";
import {sendForCompletion} from 'app/server/lib/Assistance';
import {
  assertAccess,
  getAuthorizedUserId,
  getOrSetDocAuth,
  getTransitiveHeaders,
  getUserId,
  isAnonymousUser,
  RequestWithLogin
} from 'app/server/lib/Authorizer';
import {DocManager} from "app/server/lib/DocManager";
import {docSessionFromRequest, getDocSessionShare, makeExceptionalDocSession,
        OptDocSession} from "app/server/lib/DocSession";
import {DocWorker} from "app/server/lib/DocWorker";
import {IDocWorkerMap} from "app/server/lib/DocWorkerMap";
import {DownloadOptions, parseExportParameters} from "app/server/lib/Export";
import {downloadDSV} from "app/server/lib/ExportDSV";
import {collectTableSchemaInFrictionlessFormat} from "app/server/lib/ExportTableSchema";
import {streamXLSX} from "app/server/lib/ExportXLSX";
import {expressWrap} from 'app/server/lib/expressWrap';
import {filterDocumentInPlace} from "app/server/lib/filterUtils";
import {googleAuthTokenMiddleware} from "app/server/lib/GoogleAuth";
import {exportToDrive} from "app/server/lib/GoogleExport";
import {GristServer} from 'app/server/lib/GristServer';
import {HashUtil} from 'app/server/lib/HashUtil';
import {makeForkIds} from "app/server/lib/idUtils";
import log from 'app/server/lib/log';
import {
  getDocId,
  getDocScope,
  getScope,
  integerParam,
  isParameterOn,
  optBooleanParam,
  optIntegerParam,
  optStringParam,
  sendOkReply,
  sendReply,
  stringParam
} from 'app/server/lib/requestUtils';
import {ServerColumnGetters} from 'app/server/lib/ServerColumnGetters';
import {localeFromRequest} from "app/server/lib/ServerLocale";
import {isUrlAllowed, WebhookAction, WebHookSecret} from "app/server/lib/Triggers";
import {fetchDoc, globalUploadSet, handleOptionalUpload, handleUpload,
        makeAccessId} from "app/server/lib/uploads";
import * as assert from 'assert';
import contentDisposition from 'content-disposition';
import {Application, NextFunction, Request, RequestHandler, Response} from "express";
import * as _ from "lodash";
import LRUCache from 'lru-cache';
import * as moment from 'moment';
import fetch from 'node-fetch';
import * as path from 'path';
import * as t from "ts-interface-checker";
import {Checker} from "ts-interface-checker";
import uuidv4 from "uuid/v4";
import { Document } from "app/gen-server/entity/Document";
// Cap on the number of requests that can be outstanding on a single document via the
// rest doc api. When this limit is exceeded, incoming requests receive an immediate
// reply with status 429.
const MAX_PARALLEL_REQUESTS_PER_DOC = 10;

// This is NOT the number of docs that can be handled at a time.
// It's a very generous upper bound of what that number might be.
// If there are more docs than this for which API requests are being regularly made at any moment,
// then the _dailyUsage cache may become unreliable and users may be able to exceed their allocated requests.
const MAX_ACTIVE_DOCS_USAGE_CACHE = 1000;

// Maximum amount of time that a webhook endpoint can hold the mutex for in withDocTriggersLock.
const MAX_DOC_TRIGGERS_LOCK_MS = 15_000;

// Maximum duration of a call to /sql. Does not apply to internal calls to SQLite.
const MAX_CUSTOM_SQL_MSEC = appSettings.section('integrations')
  .section('sql').flag('timeout').requireInt({
    envVar: 'GRIST_SQL_TIMEOUT_MSEC',
    defaultValue: 1000,
  });

type WithDocHandler = (activeDoc: ActiveDoc, req: RequestWithLogin, resp: Response) => Promise<void>;

// Schema validators for api endpoints that create or update records.
const {
  RecordsPatch, RecordsPost, RecordsPut,
  ColumnsPost, ColumnsPatch, ColumnsPut,
  SqlPost,
  TablesPost, TablesPatch,
} = t.createCheckers(DocApiTypesTI, GristDataTI);

for (const checker of [RecordsPatch, RecordsPost, RecordsPut, ColumnsPost, ColumnsPatch,
                       SqlPost, TablesPost, TablesPatch]) {
  checker.setReportedPath("body");
}

// Schema validators for webhook endpoints that create or update webhooks.
const {
  WebhookPatch,
  WebhookSubscribe,
  WebhookSubscribeCollection,
} = t.createCheckers(TriggersTI);
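// (Judging by the names, WebhookSubscribeCollection validates the batched
// {webhooks: [...]} body accepted by POST /webhooks, while WebhookSubscribe and
// WebhookPatch cover a single webhook's fields; the definitions live in Triggers-ti.)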
/**
 * Middleware for validating request's body with a Checker instance.
 */
function validate(checker: Checker): RequestHandler {
  return (req, res, next) => {
    validateCore(checker, req, req.body);
    next();
  };
}

function validateCore(checker: Checker, req: Request, body: any) {
  try {
    checker.check(body);
  } catch(err) {
    log.warn(`Error during api call to ${req.path}: Invalid payload: ${String(err)}`);
    throw new ApiError('Invalid payload', 400, {userError: String(err)});
  }
}
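// Usage note: routes attach these checkers as middleware, e.g.
//   this._app.post('/api/docs/:docId/sql', canView, validate(SqlPost), ...)
// so malformed bodies are rejected with a 400 before the handler runs
// (see the /sql, /columns and /tables endpoints below).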
export class DocWorkerApi {
  // Map from docId to number of requests currently being handled for that doc
  private _currentUsage = new Map<string, number>();

  // Map from (docId, time period) combination produced by docPeriodicApiUsageKey
  // to number of requests previously served for that combination.
  // We multiply by 5 because there are 5 relevant keys per doc at any time (current/next day/hour and current minute).
  private _dailyUsage = new LRUCache<string, number>({max: 5 * MAX_ACTIVE_DOCS_USAGE_CACHE});
  constructor(private _app: Application, private _docWorker: DocWorker,
              private _docWorkerMap: IDocWorkerMap, private _docManager: DocManager,
              private _dbManager: HomeDBManager, private _grist: GristServer) {}

  /**
   * Adds endpoints for the doc api.
   *
   * Note that it expects bodyParser, userId, and jsonErrorHandler middleware to be set up outside
   * to apply to these routes.
   */
  public addEndpoints() {
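    // Rewrite URLs of the form /api/s/<shareKey>/... onto the regular doc API path,
    // so that special shares (keyed rows of _grist_Shares) are served by the same endpoints.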
    this._app.use((req, res, next) => {
      if (req.url.startsWith('/api/s/')) {
        req.url = req.url.replace('/api/s/', `/api/docs/${SHARE_KEY_PREFIX}`);
      }
      next();
    });

    // check document exists (not soft deleted) and user can view it
    const canView = expressWrap(this._assertAccess.bind(this, 'viewers', false));
    // check document exists (not soft deleted) and user can edit it
    const canEdit = expressWrap(this._assertAccess.bind(this, 'editors', false));
    const checkAnonymousCreation = expressWrap(this._checkAnonymousCreation.bind(this));
    const isOwner = expressWrap(this._assertAccess.bind(this, 'owners', false));
    // check user can edit document, with soft-deleted documents being acceptable
    const canEditMaybeRemoved = expressWrap(this._assertAccess.bind(this, 'editors', true));
    // converts google code to access token and adds it to request object
    const decodeGoogleToken = expressWrap(googleAuthTokenMiddleware.bind(null));
    // check that limit can be increased by 1
    const checkLimit = (type: LimitType) => expressWrap(this._checkLimit.bind(this, type));

    // Middleware to limit number of outstanding requests per document. Will also
    // handle errors like expressWrap would.
    const throttled = this._apiThrottle.bind(this);

    const withDoc = (callback: WithDocHandler) => throttled(this._requireActiveDoc(callback));

    // Like withDoc, but only one such callback can run at a time per active doc.
    // This is used for webhook endpoints to prevent simultaneous changes to configuration
    // or clearing queues which could lead to weird problems.
    const withDocTriggersLock = (callback: WithDocHandler) => withDoc(
      async (activeDoc: ActiveDoc, req: RequestWithLogin, resp: Response) =>
        await activeDoc.triggersLock.runExclusive(async () => {
          // We don't want to hold the mutex indefinitely so that if one call gets stuck
          // (especially while trying to apply user actions which are stalled by a full queue)
          // another call which would clear a queue, disable a webhook, or fix something related
          // can eventually succeed.
          if (await timeoutReached(MAX_DOC_TRIGGERS_LOCK_MS, callback(activeDoc, req, resp), {rethrow: true})) {
            log.rawError(`Webhook endpoint timed out, releasing mutex`,
              {method: req.method, path: req.path, docId: activeDoc.docName});
          }
        })
    );

    // Apply user actions to a document.
    this._app.post('/api/docs/:docId/apply', canEdit, withDoc(async (activeDoc, req, res) => {
      const parseStrings = !isAffirmative(req.query.noparse);
      res.json(await activeDoc.applyUserActions(docSessionFromRequest(req), req.body, {parseStrings}));
    }));

    async function readTable(
      req: RequestWithLogin,
      activeDoc: ActiveDoc,
      tableId: string,
      filters: QueryFilters,
      params: QueryParameters & {immediate?: boolean}) {
      // Option to skip waiting for document initialization.
      const immediate = isAffirmative(params.immediate);
      if (!Object.keys(filters).every(col => Array.isArray(filters[col]))) {
        throw new ApiError("Invalid query: filter values must be arrays", 400);
      }
      const session = docSessionFromRequest(req);
      const {tableData} = await handleSandboxError(tableId, [], activeDoc.fetchQuery(
        session, {tableId, filters}, !immediate));
      // For metaTables we don't need to specify columns, search will infer it from the sort expression.
      const isMetaTable = tableId.startsWith('_grist');
      const columns = isMetaTable ? null :
        await handleSandboxError('', [], activeDoc.getTableCols(session, tableId, true));
      // Apply sort/limit parameters, if set. TODO: move sorting/limiting into data engine
      // and sql.
      return applyQueryParameters(fromTableDataAction(tableData), params, columns);
    }

    async function getTableData(activeDoc: ActiveDoc, req: RequestWithLogin, optTableId?: string) {
      const filters = req.query.filter ? JSON.parse(String(req.query.filter)) : {};
      // Option to skip waiting for document initialization.
      const immediate = isAffirmative(req.query.immediate);
      const tableId = await getRealTableId(optTableId || req.params.tableId, {activeDoc, req});
      const params = getQueryParameters(req);
      return await readTable(req, activeDoc, tableId, filters, {...params, immediate});
    }

    function asRecords(
      columnData: TableColValues,
      opts?: {
        optTableId?: string;
        includeHidden?: boolean;
        includeId?: boolean;
      }
    ): TableRecordValue[] {
      const fieldNames = Object.keys(columnData).filter((k) => {
        if (!opts?.includeId && k === "id") {
          return false;
        }
        if (
          !opts?.includeHidden &&
          (k === "manualSort" || k.startsWith("gristHelper_"))
        ) {
          return false;
        }
        return true;
      });
      return columnData.id.map((id, index) => {
        const result: TableRecordValue = { id, fields: {} };
        for (const key of fieldNames) {
          let value = columnData[key][index];
          if (isRaisedException(value)) {
            _.set(result, ["errors", key], (value as string[])[1]);
            value = null;
          }
          result.fields[key] = value;
        }
        return result;
      });
    }

    async function getTableRecords(
      activeDoc: ActiveDoc, req: RequestWithLogin, opts?: { optTableId?: string; includeHidden?: boolean }
    ): Promise<TableRecordValue[]> {
      const columnData = await getTableData(activeDoc, req, opts?.optTableId);
      return asRecords(columnData, opts);
    }

    // Get the specified table in column-oriented format
    this._app.get('/api/docs/:docId/tables/:tableId/data', canView,
      withDoc(async (activeDoc, req, res) => {
        res.json(await getTableData(activeDoc, req));
      })
    );

    // Get the specified table in record-oriented format
    this._app.get('/api/docs/:docId/tables/:tableId/records', canView,
      withDoc(async (activeDoc, req, res) => {
        const records = await getTableRecords(activeDoc, req,
          { includeHidden: isAffirmative(req.query.hidden) }
        );
        res.json({records});
      })
    );
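    // Shared helpers for the webhook endpoints. Per the commit message above, the new
    // POST /webhooks and DELETE /webhooks/:webhookId routes reuse the same logic as the
    // legacy _subscribe/_unsubscribe endpoints; removeWebhook only enforces the
    // unsubscribeKey for callers who are not the document owner.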
    const registerWebhook = async (activeDoc: ActiveDoc, req: RequestWithLogin, webhook: WebhookFields) => {
      const {fields, url} = await getWebhookSettings(activeDoc, req, null, webhook);
      if (!fields.eventTypes?.length) {
        throw new ApiError(`eventTypes must be a non-empty array`, 400);
      }
      if (!isUrlAllowed(url)) {
        throw new ApiError('Provided url is forbidden', 403);
      }
      if (!fields.tableRef) {
        throw new ApiError(`tableId is required`, 400);
      }

      const unsubscribeKey = uuidv4();
      const webhookSecret: WebHookSecret = {unsubscribeKey, url};
      const secretValue = JSON.stringify(webhookSecret);
      const webhookId = (await this._dbManager.addSecret(secretValue, activeDoc.docName)).id;

      try {
        const webhookAction: WebhookAction = {type: "webhook", id: webhookId};
        const sandboxRes = await handleSandboxError("_grist_Triggers", [], activeDoc.applyUserActions(
          docSessionFromRequest(req),
          [['AddRecord', "_grist_Triggers", null, {
            enabled: true,
            ...fields,
            actions: JSON.stringify([webhookAction])
          }]]));
        return {
          unsubscribeKey,
          triggerId: sandboxRes.retValues[0],
          webhookId,
        };
      } catch (err) {
        // remove webhook
        await this._dbManager.removeWebhook(webhookId, activeDoc.docName, '', false);
        throw err;
      } finally {
        await activeDoc.sendWebhookNotification();
      }
    };

    const removeWebhook = async (activeDoc: ActiveDoc, req: RequestWithLogin, res: Response) => {
      const {unsubscribeKey} = req.body as WebhookSubscription;
      const webhookId = req.params.webhookId ?? req.body.webhookId;

      // owner does not need to provide unsubscribeKey
      const checkKey = !(await this._isOwner(req));
      const triggerRowId = activeDoc.triggers.getWebhookTriggerRecord(webhookId).id;
      // Validate unsubscribeKey before deleting trigger from document
      await this._dbManager.removeWebhook(webhookId, activeDoc.docName, unsubscribeKey, checkKey);
      activeDoc.triggers.webhookDeleted(webhookId);

      await handleSandboxError("_grist_Triggers", [], activeDoc.applyUserActions(
        docSessionFromRequest(req),
        [['RemoveRecord', "_grist_Triggers", triggerRowId]]));

      await activeDoc.sendWebhookNotification();

      res.json({success: true});
    };

    async function getWebhookSettings(activeDoc: ActiveDoc, req: RequestWithLogin,
                                      webhookId: string|null, webhook: WebhookFields) {
      const metaTables = await getMetaTables(activeDoc, req);
      const tablesTable = activeDoc.docData!.getMetaTable("_grist_Tables");
      const trigger = webhookId ? activeDoc.triggers.getWebhookTriggerRecord(webhookId) : undefined;
      let currentTableId = trigger ? tablesTable.getValue(trigger.tableRef, 'tableId')! : undefined;
      const {url, eventTypes, watchedColIds, isReadyColumn, name} = webhook;
      const tableId = await getRealTableId(req.params.tableId || webhook.tableId, {metaTables});

      const fields: Partial<SchemaTypes['_grist_Triggers']> = {};

      if (url && !isUrlAllowed(url)) {
        throw new ApiError('Provided url is forbidden', 403);
      }

      if (eventTypes) {
        if (!eventTypes.length) {
          throw new ApiError(`eventTypes must be a non-empty array`, 400);
        }
        fields.eventTypes = [GristObjCode.List, ...eventTypes];
      }

      if (tableId !== undefined) {
        if (watchedColIds) {
          if (tableId !== currentTableId && currentTableId) {
            // if the tableId changed, we need to reset the watchedColIds
            fields.watchedColRefList = [GristObjCode.List];
          } else {
            if (!tableId) {
              throw new ApiError(`Cannot find columns "${watchedColIds}" because table is not known`, 404);
            }
            fields.watchedColRefList = [GristObjCode.List, ...watchedColIds
              .filter(colId => colId.trim() !== "")
              .map(
                colId => { return colIdToReference(metaTables, tableId, colId.trim().replace(/^\$/, '')); }
              )];
          }
        } else {
          fields.watchedColRefList = [GristObjCode.List];
        }
        fields.tableRef = tableIdToRef(metaTables, tableId);
        currentTableId = tableId;
      }

      if (isReadyColumn !== undefined) {
        // When isReadyColumn is defined let's explicitly change the ready column to the new col
        // id, null or empty string being a special case that unsets it.
        if (isReadyColumn !== null && isReadyColumn !== '') {
          if (!currentTableId) {
            throw new ApiError(`Cannot find column "${isReadyColumn}" because table is not known`, 404);
          }
          fields.isReadyColRef = colIdToReference(metaTables, currentTableId, isReadyColumn);
        } else {
          fields.isReadyColRef = 0;
        }
      } else if (tableId) {
        // When isReadyColumn is undefined but tableId was changed, let's unset the ready column
        fields.isReadyColRef = 0;
      }

      // assign other field properties
      Object.assign(fields, _.pick(webhook, ['enabled', 'memo']));
      if (name) {
        fields.label = name;
      }
      return {
        fields,
        url,
      };
    }

    // Get the columns of the specified table in recordish format
    this._app.get('/api/docs/:docId/tables/:tableId/columns', canView,
      withDoc(async (activeDoc, req, res) => {
        const tableId = await getRealTableId(req.params.tableId, {activeDoc, req});
        const includeHidden = isAffirmative(req.query.hidden);
        const columns = await handleSandboxError('', [],
          activeDoc.getTableCols(docSessionFromRequest(req), tableId, includeHidden));
        res.json({columns});
      })
    );

    // Get the tables of the specified document in recordish format
    this._app.get('/api/docs/:docId/tables', canView,
      withDoc(async (activeDoc, req, res) => {
        const records = await getTableRecords(activeDoc, req, { optTableId: "_grist_Tables" });
const tables: Types.RecordWithStringId[] = records.map((record) => ({
|
|
|
|
id: String(record.fields.tableId),
|
2022-10-20 19:24:14 +00:00
|
|
|
fields: {
|
|
|
|
..._.omit(record.fields, "tableId"),
|
|
|
|
tableRef: record.id,
|
|
|
|
}
|
|
|
|
})).filter(({id}) => id);
|
|
|
|
res.json({tables});
|
|
|
|
})
|
|
|
|
);
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
// The upload should be a multipart post with an 'upload' field containing one or more files.
|
|
|
|
// Returns the list of rowIds for the rows created in the _grist_Attachments table.
|
|
|
|
this._app.post('/api/docs/:docId/attachments', canEdit, withDoc(async (activeDoc, req, res) => {
|
|
|
|
const uploadResult = await handleUpload(req, res);
|
2020-09-11 20:27:09 +00:00
|
|
|
res.json(await activeDoc.addAttachments(docSessionFromRequest(req), uploadResult.uploadId));
|
2020-07-21 13:20:51 +00:00
|
|
|
}));
|
|
|
|
|
2022-05-20 11:50:22 +00:00
|
|
|
// Select the fields from an attachment record that we want to return to the user,
|
|
|
|
// and convert the timeUploaded from a number to an ISO string.
|
|
|
|
function cleanAttachmentRecord(record: MetaRowRecord<"_grist_Attachments">) {
|
|
|
|
const {fileName, fileSize, timeUploaded: time} = record;
|
|
|
|
const timeUploaded = (typeof time === 'number') ? new Date(time).toISOString() : undefined;
|
|
|
|
return {fileName, fileSize, timeUploaded};
|
|
|
|
}
|
|
|
|
|
|
|
|
// Returns cleaned metadata for all attachments in /records format.
|
|
|
|
this._app.get('/api/docs/:docId/attachments', canView, withDoc(async (activeDoc, req, res) => {
|
2023-08-14 14:17:46 +00:00
|
|
|
const rawRecords = await getTableRecords(activeDoc, req, { optTableId: "_grist_Attachments" });
|
2022-05-20 11:50:22 +00:00
|
|
|
const records = rawRecords.map(r => ({
|
|
|
|
id: r.id,
|
|
|
|
fields: cleanAttachmentRecord(r.fields as MetaRowRecord<"_grist_Attachments">),
|
|
|
|
}));
|
|
|
|
res.json({records});
|
|
|
|
}));
|
|
|
|
|
|
|
|
// Returns cleaned metadata for a given attachment ID (i.e. a rowId in _grist_Attachments table).
|
2020-07-21 13:20:51 +00:00
|
|
|
this._app.get('/api/docs/:docId/attachments/:attId', canView, withDoc(async (activeDoc, req, res) => {
|
2022-07-06 22:36:09 +00:00
|
|
|
const attId = integerParam(req.params.attId, 'attId');
|
|
|
|
const attRecord = activeDoc.getAttachmentMetadata(attId);
|
2022-05-20 11:50:22 +00:00
|
|
|
res.json(cleanAttachmentRecord(attRecord));
|
2020-07-21 13:20:51 +00:00
|
|
|
}));
|
|
|
|
|
|
|
|
// Responds with attachment contents, with suitable Content-Type and Content-Disposition.
|
|
|
|
this._app.get('/api/docs/:docId/attachments/:attId/download', canView, withDoc(async (activeDoc, req, res) => {
|
2022-07-06 22:36:09 +00:00
|
|
|
const attId = integerParam(req.params.attId, 'attId');
|
2023-09-05 18:27:35 +00:00
|
|
|
const tableId = optStringParam(req.params.tableId, 'tableId');
|
|
|
|
const colId = optStringParam(req.params.colId, 'colId');
|
|
|
|
const rowId = optIntegerParam(req.params.rowId, 'rowId');
|
2022-07-06 22:36:09 +00:00
|
|
|
if ((tableId || colId || rowId) && !(tableId && colId && rowId)) {
|
|
|
|
throw new ApiError('define all of tableId, colId and rowId, or none.', 400);
|
|
|
|
}
|
|
|
|
const attRecord = activeDoc.getAttachmentMetadata(attId);
|
|
|
|
const cell = (tableId && colId && rowId) ? {tableId, colId, rowId} : undefined;
|
2020-07-21 13:20:51 +00:00
|
|
|
const fileIdent = attRecord.fileIdent as string;
|
|
|
|
const ext = path.extname(fileIdent);
|
|
|
|
const origName = attRecord.fileName as string;
|
|
|
|
const fileName = ext ? path.basename(origName, path.extname(origName)) + ext : origName;
|
2022-11-15 14:37:48 +00:00
|
|
|
const fileData = await activeDoc.getAttachmentData(docSessionFromRequest(req), attRecord, {cell});
|
2020-07-21 13:20:51 +00:00
|
|
|
res.status(200)
|
|
|
|
.type(ext)
|
|
|
|
// Construct a content-disposition header of the form 'attachment; filename="NAME"'
|
|
|
|
.set('Content-Disposition', contentDisposition(fileName, {type: 'attachment'}))
|
|
|
|
.set('Cache-Control', 'private, max-age=3600')
|
|
|
|
.send(fileData);
|
|
|
|
}));
|
|
|
|
|
2022-04-07 12:34:50 +00:00
|
|
|
// Mostly for testing
|
|
|
|
this._app.post('/api/docs/:docId/attachments/updateUsed', canEdit, withDoc(async (activeDoc, req, res) => {
|
2022-05-03 05:20:31 +00:00
|
|
|
await activeDoc.updateUsedAttachmentsIfNeeded();
|
2022-04-07 12:34:50 +00:00
|
|
|
res.json(null);
|
|
|
|
}));
|
2022-04-12 14:33:48 +00:00
|
|
|
this._app.post('/api/docs/:docId/attachments/removeUnused', isOwner, withDoc(async (activeDoc, req, res) => {
|
|
|
|
const expiredOnly = isAffirmative(req.query.expiredonly);
|
|
|
|
const verifyFiles = isAffirmative(req.query.verifyfiles);
|
|
|
|
await activeDoc.removeUnusedAttachments(expiredOnly);
|
|
|
|
if (verifyFiles) {
|
2022-04-22 18:07:14 +00:00
|
|
|
await verifyAttachmentFiles(activeDoc);
|
2022-04-12 14:33:48 +00:00
|
|
|
}
|
|
|
|
res.json(null);
|
|
|
|
}));
|
2022-04-22 18:07:14 +00:00
|
|
|
this._app.post('/api/docs/:docId/attachments/verifyFiles', isOwner, withDoc(async (activeDoc, req, res) => {
|
|
|
|
await verifyAttachmentFiles(activeDoc);
|
|
|
|
res.json(null);
|
|
|
|
}));
|
|
|
|
|
|
|
|
async function verifyAttachmentFiles(activeDoc: ActiveDoc) {
|
|
|
|
assert.deepStrictEqual(
|
|
|
|
await activeDoc.docStorage.all(`SELECT DISTINCT fileIdent AS ident FROM _grist_Attachments ORDER BY ident`),
|
|
|
|
await activeDoc.docStorage.all(`SELECT ident FROM _gristsys_Files ORDER BY ident`),
|
|
|
|
);
|
|
|
|
}
|
2022-04-07 12:34:50 +00:00
|
|
|
|
2021-08-12 14:48:24 +00:00
|
|
|
// Adds records given in a column oriented format,
|
|
|
|
// returns an array of row IDs
|
|
|
|
this._app.post('/api/docs/:docId/tables/:tableId/data', canEdit,
|
|
|
|
withDoc(async (activeDoc, req, res) => {
|
2021-10-15 09:31:13 +00:00
|
|
|
const colValues = req.body as BulkColValues;
|
|
|
|
const count = colValues[Object.keys(colValues)[0]].length;
|
2023-10-12 17:32:22 +00:00
|
|
|
const op = await getTableOperations(req, activeDoc);
|
2022-03-15 14:35:15 +00:00
|
|
|
const ids = await op.addRecords(count, colValues);
|
2021-08-12 14:48:24 +00:00
|
|
|
res.json(ids);
|
|
|
|
})
|
|
|
|
);

    // Adds records given in a record-oriented format;
    // returns in the same format as GET /records, but without the fields object for now.
    // WARNING: The `req.body` object is modified in place.
    this._app.post('/api/docs/:docId/tables/:tableId/records', canEdit,
      withDoc(async (activeDoc, req, res) => {
        let body = req.body;
        if (isAffirmative(req.query.flat)) {
          if (!body.records && Array.isArray(body)) {
            for (const [i, rec] of body.entries()) {
              if (!rec.fields) {
                // If ids arrive in a loosely formatted flat payload,
                // remove them since we cannot honor them. If not loosely
                // formatted, throw an error later. TODO: would be useful
                // to have a way to exclude or rename fields via query
                // parameters.
                if (rec.id) { delete rec.id; }
                body[i] = {fields: rec};
              }
            }
            body = {records: body};
          }
        }
        validateCore(RecordsPost, req, body);
        const ops = await getTableOperations(req, activeDoc);
        const records = await ops.create(body.records);
        if (req.query.utm_source === 'grist-forms') {
          activeDoc.logTelemetryEvent(docSessionFromRequest(req), 'submittedForm');
        }
        res.json({records});
      })
    );
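    // A minimal usage sketch (hypothetical column names). With ?flat=1 the same
    // record may instead be sent as a bare array of field objects:
    //   POST /api/docs/[docId]/tables/Table1/records
    //   { "records": [{ "fields": { "Name": "Alice" } }] }
    //   => { "records": [{ "id": 1 }] }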

    // A GET /sql endpoint that takes a query like ?q=select+*+from+Table1
    // Not very useful, apart from testing - see the POST endpoint for
    // serious use.
    // If SQL statements that modify the DB are ever supported, they should
    // not be permitted by this endpoint.
    this._app.get(
      '/api/docs/:docId/sql', canView,
      withDoc(async (activeDoc, req, res) => {
        const sql = stringParam(req.query.q, 'q');
        await this._runSql(activeDoc, req, res, { sql });
      }));

    // A POST /sql endpoint, accepting a body like:
    // { "sql": "select * from Table1 where name = ?", "args": ["Paul"] }
    // Only SELECT statements are currently supported.
    this._app.post(
      '/api/docs/:docId/sql', canView, validate(SqlPost),
      withDoc(async (activeDoc, req, res) => {
        await this._runSql(activeDoc, req, res, req.body);
      }));

    // Create columns in a table, given as records of the _grist_Tables_column metatable.
    this._app.post('/api/docs/:docId/tables/:tableId/columns', canEdit, validate(ColumnsPost),
      withDoc(async (activeDoc, req, res) => {
        const body = req.body as Types.ColumnsPost;
        const tableId = await getRealTableId(req.params.tableId, {activeDoc, req});
        const actions = body.columns.map(({fields, id: colId}) =>
          // AddVisibleColumn adds the column to all widgets of the table.
          // This isn't necessarily what the user wants, but it seems like a good default.
          // Maybe there should be a query param to control this?
          ["AddVisibleColumn", tableId, colId, fields || {}]
        );
        const {retValues} = await handleSandboxError(tableId, [],
          activeDoc.applyUserActions(docSessionFromRequest(req), actions)
        );
        const columns = retValues.map(({colId}) => ({id: colId}));
        res.json({columns});
      })
    );
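    // A minimal usage sketch (hypothetical column id and fields):
    //   POST /api/docs/[docId]/tables/Table1/columns
    //   { "columns": [{ "id": "Age", "fields": { "type": "Int", "label": "Age" } }] }
    //   => { "columns": [{ "id": "Age" }] }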

    // Create new tables in a doc. Unlike POST /records or /columns, each 'record' (table) should have a `columns`
    // property in the same format as POST /columns above, and no `fields` property.
    this._app.post('/api/docs/:docId/tables', canEdit, validate(TablesPost),
      withDoc(async (activeDoc, req, res) => {
        const body = req.body as Types.TablesPost;
        const actions = body.tables.map(({columns, id}) => {
          const colInfos = columns.map(({fields, id: colId}) => ({...fields, id: colId}));
          return ["AddTable", id, colInfos];
        });
        const {retValues} = await activeDoc.applyUserActions(docSessionFromRequest(req), actions);
        const tables = retValues.map(({table_id}) => ({id: table_id}));
        res.json({tables});
      })
    );
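    // A minimal usage sketch (hypothetical table and column names):
    //   POST /api/docs/[docId]/tables
    //   { "tables": [{ "id": "People", "columns": [{ "id": "Name", "fields": { "type": "Text" } }] }] }
    //   => { "tables": [{ "id": "People" }] }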

    this._app.post('/api/docs/:docId/tables/:tableId/data/delete', canEdit, withDoc(async (activeDoc, req, res) => {
      const rowIds = req.body;
      const op = await getTableOperations(req, activeDoc);
      await op.destroy(rowIds);
      res.json(null);
    }));
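    // The body is a plain JSON array of the row IDs to delete, e.g. [1, 2, 3].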

    // Download full document
    // TODO: look at download behavior if ActiveDoc is shutdown during call (cannot
    // use withDoc wrapper)
    this._app.get('/api/docs/:docId/download', canView, throttled(async (req, res) => {
      // Support a dryRun flag to check if user has the right to download the
      // full document.
      const dryRun = isAffirmative(req.query.dryrun || req.query.dryRun);
      const dryRunSuccess = () => res.status(200).json({dryRun: 'allowed'});

      const filename = await this._getDownloadFilename(req);

      // We want to have a way to download broken docs that ActiveDoc may not be able
      // to load. So, if the user owns the document, we unconditionally let them
      // download.
      if (await this._isOwner(req, {acceptTrunkForSnapshot: true})) {
        if (dryRun) { dryRunSuccess(); return; }
        try {
          // We carefully avoid creating an ActiveDoc for the document being downloaded,
          // in case it is broken in some way. It is convenient to be able to download
          // broken files for diagnosis/recovery.
          return await this._docWorker.downloadDoc(req, res, this._docManager.storageManager, filename);
        } catch (e) {
          if (e.message && e.message.match(/does not exist yet/)) {
            // The document has never been seen on file system / s3. It may be new, so
            // we try again after having created an ActiveDoc for the document.
            await this._getActiveDoc(req);
            return this._docWorker.downloadDoc(req, res, this._docManager.storageManager, filename);
          } else {
            throw e;
          }
        }
      } else {
        // If the user is not an owner, we load the document as an ActiveDoc, and then
        // check if the user has download permissions.
        const activeDoc = await this._getActiveDoc(req);
        if (!await activeDoc.canDownload(docSessionFromRequest(req))) {
          throw new ApiError('not authorized to download this document', 403);
        }
        if (dryRun) { dryRunSuccess(); return; }
        return this._docWorker.downloadDoc(req, res, this._docManager.storageManager, filename);
      }
    }));

    // Fork the specified document.
    this._app.post('/api/docs/:docId/fork', canView, withDoc(async (activeDoc, req, res) => {
      const result = await activeDoc.fork(docSessionFromRequest(req));
      res.json(result);
    }));

    // Initiate a fork. Used internally to implement ActiveDoc.fork. Only usable via a Permit.
    this._app.post('/api/docs/:docId/create-fork', canEdit, throttled(async (req, res) => {
      const docId = stringParam(req.params.docId, 'docId');
      const srcDocId = stringParam(req.body.srcDocId, 'srcDocId');
      if (srcDocId !== req.specialPermit?.otherDocId) { throw new Error('access denied'); }
      const fname = await this._docManager.storageManager.prepareFork(srcDocId, docId);
      await filterDocumentInPlace(docSessionFromRequest(req), fname);
      res.json({srcDocId, docId});
    }));

    // Update records given in column format.
    // The records to update are identified by their id column.
    this._app.patch('/api/docs/:docId/tables/:tableId/data', canEdit,
      withDoc(async (activeDoc, req, res) => {
        const columnValues = req.body;
        const rowIds = columnValues.id;
        // sandbox expects no id column
        delete columnValues.id;
        const ops = await getTableOperations(req, activeDoc);
        await ops.updateRecords(columnValues, rowIds);
        res.json(null);
      })
    );
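    // A minimal usage sketch (hypothetical column names); rows are matched by id:
    //   PATCH /api/docs/[docId]/tables/Table1/data
    //   { "id": [1, 2], "Name": ["Alice", "Bob"] }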

    // Update records given in records format.
    this._app.patch('/api/docs/:docId/tables/:tableId/records', canEdit, validate(RecordsPatch),
      withDoc(async (activeDoc, req, res) => {
        const body = req.body as Types.RecordsPatch;
        const ops = await getTableOperations(req, activeDoc);
        await ops.update(body.records);
        res.json(null);
      })
    );
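    // A minimal usage sketch (hypothetical column names):
    //   PATCH /api/docs/[docId]/tables/Table1/records
    //   { "records": [{ "id": 1, "fields": { "Name": "Alice" } }] }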

    // Update columns given in records format.
    this._app.patch('/api/docs/:docId/tables/:tableId/columns', canEdit, validate(ColumnsPatch),
      withDoc(async (activeDoc, req, res) => {
        const tablesTable = activeDoc.docData!.getMetaTable("_grist_Tables");
        const columnsTable = activeDoc.docData!.getMetaTable("_grist_Tables_column");
        const tableId = await getRealTableId(req.params.tableId, {activeDoc, req});
        const tableRef = tablesTable.findMatchingRowId({tableId});
        if (!tableRef) {
          throw new ApiError(`Table not found "${tableId}"`, 404);
        }
        const body = req.body as Types.ColumnsPatch;
        const columns: Types.Record[] = body.columns.map((col) => {
          const id = columnsTable.findMatchingRowId({parentId: tableRef, colId: col.id});
          if (!id) {
            throw new ApiError(`Column not found "${col.id}"`, 404);
          }
          return {...col, id};
        });
        const ops = await getTableOperations(req, activeDoc, "_grist_Tables_column");
        await ops.update(columns);
        res.json(null);
      })
    );

    // Update tables given in records format.
    this._app.patch('/api/docs/:docId/tables', canEdit, validate(TablesPatch),
      withDoc(async (activeDoc, req, res) => {
        const tablesTable = activeDoc.docData!.getMetaTable("_grist_Tables");
        const body = req.body as Types.TablesPatch;
        const tables: Types.Record[] = body.tables.map((table) => {
          const id = tablesTable.findMatchingRowId({tableId: table.id});
          if (!id) {
            throw new ApiError(`Table not found "${table.id}"`, 404);
          }
          return {...table, id};
        });
        const ops = await getTableOperations(req, activeDoc, "_grist_Tables");
        await ops.update(tables);
        res.json(null);
      })
    );

    // Add or update records given in records format.
    this._app.put('/api/docs/:docId/tables/:tableId/records', canEdit, validate(RecordsPut),
      withDoc(async (activeDoc, req, res) => {
        const ops = await getTableOperations(req, activeDoc);
        const body = req.body as Types.RecordsPut;
        const options = {
          add: !isAffirmative(req.query.noadd),
          update: !isAffirmative(req.query.noupdate),
          onMany: stringParam(req.query.onmany || "first", "onmany", {
            allowed: ["first", "none", "all"],
          }) as 'first'|'none'|'all'|undefined,
          allowEmptyRequire: isAffirmative(req.query.allow_empty_require),
        };
        await ops.upsert(body.records, options);
        res.json(null);
      })
    );
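    // A minimal usage sketch (hypothetical column names): `require` identifies the
    // record to match, and `fields` holds the values to set on add or update:
    //   PUT /api/docs/[docId]/tables/Table1/records
    //   { "records": [{ "require": { "Email": "alice@example.com" }, "fields": { "Name": "Alice" } }] }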

    // Add or update columns given in records format.
    this._app.put('/api/docs/:docId/tables/:tableId/columns', canEdit, validate(ColumnsPut),
      withDoc(async (activeDoc, req, res) => {
        const tablesTable = activeDoc.docData!.getMetaTable("_grist_Tables");
        const columnsTable = activeDoc.docData!.getMetaTable("_grist_Tables_column");
        const tableId = await getRealTableId(req.params.tableId, {activeDoc, req});
        const tableRef = tablesTable.findMatchingRowId({tableId});
        if (!tableRef) {
          throw new ApiError(`Table not found "${tableId}"`, 404);
        }
        const body = req.body as Types.ColumnsPut;

        const addActions: UserAction[] = [];
        const updateActions: UserAction[] = [];
        const updatedColumnsIds = new Set();

        for (const col of body.columns) {
          const id = columnsTable.findMatchingRowId({parentId: tableRef, colId: col.id});
          if (id) {
            updateActions.push(['UpdateRecord', '_grist_Tables_column', id, col.fields]);
            updatedColumnsIds.add(id);
          } else {
            addActions.push(['AddVisibleColumn', tableId, col.id, col.fields]);
          }
        }

        const getRemoveAction = async () => {
          const columns = await handleSandboxError('', [],
            activeDoc.getTableCols(docSessionFromRequest(req), tableId));
          const columnsToRemove = columns
            .map(col => col.fields.colRef as number)
            .filter(colRef => !updatedColumnsIds.has(colRef));

          return ['BulkRemoveRecord', '_grist_Tables_column', columnsToRemove];
        };

        const actions = [
          ...(!isAffirmative(req.query.noupdate) ? updateActions : []),
          ...(!isAffirmative(req.query.noadd) ? addActions : []),
          ...(isAffirmative(req.query.replaceall) ? [await getRemoveAction()] : []),
        ];
        await handleSandboxError(tableId, [],
          activeDoc.applyUserActions(docSessionFromRequest(req), actions)
        );
        res.json(null);
      })
    );
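    // A minimal usage sketch (hypothetical column): existing columns are updated,
    // missing ones are added, and with ?replaceall=1 all other columns are removed:
    //   PUT /api/docs/[docId]/tables/Table1/columns?replaceall=1
    //   { "columns": [{ "id": "Name", "fields": { "label": "Name" } }] }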

    // Remove the specified column from a table.
    this._app.delete('/api/docs/:docId/tables/:tableId/columns/:colId', canEdit,
      withDoc(async (activeDoc, req, res) => {
        const {colId} = req.params;
        const tableId = await getRealTableId(req.params.tableId, {activeDoc, req});
        const actions = [['RemoveColumn', tableId, colId]];
        await handleSandboxError(tableId, [colId],
          activeDoc.applyUserActions(docSessionFromRequest(req), actions)
        );
        res.json(null);
      })
    );

    // Add a new webhook and trigger
    this._app.post('/api/docs/:docId/webhooks', isOwner, validate(WebhookSubscribeCollection),
      withDocTriggersLock(async (activeDoc, req, res) => {
        const registeredWebhooks: Array<WebhookSubscription> = [];
        for (const webhook of req.body.webhooks) {
          const registeredWebhook = await registerWebhook(activeDoc, req, webhook.fields);
          registeredWebhooks.push(registeredWebhook);
        }
        res.json({webhooks: registeredWebhooks.map(rw => {
          return {id: rw.webhookId};
        })});
      })
    );

    /**
     * @deprecated Use POST /webhooks instead; this endpoint remains only for backward compatibility.
     */
    this._app.post('/api/docs/:docId/tables/:tableId/_subscribe', isOwner, validate(WebhookSubscribe),
      withDocTriggersLock(async (activeDoc, req, res) => {
        const registeredWebhook = await registerWebhook(activeDoc, req, req.body);
        res.json(registeredWebhook);
      })
    );

    // Clears all outgoing webhooks in the queue for this document.
    this._app.delete('/api/docs/:docId/webhooks/queue', isOwner,
      withDocTriggersLock(async (activeDoc, req, res) => {
        await activeDoc.clearWebhookQueue();
        await activeDoc.sendWebhookNotification();
        res.json({success: true});
      })
    );

    // Remove a webhook and the trigger created for it above.
    this._app.delete('/api/docs/:docId/webhooks/:webhookId', isOwner,
      withDocTriggersLock(removeWebhook)
    );

    /**
     * @deprecated Use DELETE /webhooks/:webhookId instead; this endpoint remains only for backward compatibility.
     */
    this._app.post('/api/docs/:docId/tables/:tableId/_unsubscribe', canEdit,
      withDocTriggersLock(removeWebhook)
    );

    // Update a webhook
    this._app.patch(
      '/api/docs/:docId/webhooks/:webhookId', isOwner, validate(WebhookPatch),
      withDocTriggersLock(async (activeDoc, req, res) => {

        const docId = activeDoc.docName;
        const webhookId = req.params.webhookId;
        const {fields, url} = await getWebhookSettings(activeDoc, req, webhookId, req.body);
        if (fields.enabled === false) {
          await activeDoc.triggers.clearSingleWebhookQueue(webhookId);
        }

        const triggerRowId = activeDoc.triggers.getWebhookTriggerRecord(webhookId).id;

        // First update the webhook url in the home DB.
        if (url) {
          await this._dbManager.updateWebhookUrl(webhookId, docId, url);
          activeDoc.triggers.webhookDeleted(webhookId); // clear cache
        }

        // Then update the trigger in the document.
        if (Object.keys(fields).length) {
          await handleSandboxError("_grist_Triggers", [], activeDoc.applyUserActions(
            docSessionFromRequest(req),
            [['UpdateRecord', "_grist_Triggers", triggerRowId, fields]]));
        }

        await activeDoc.sendWebhookNotification();

        res.json({success: true});
      })
    );
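    // A minimal usage sketch: the body takes the same webhook fields as POST /webhooks,
    // e.g. { "enabled": false } to disable a webhook and clear its queue.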

    // Clears the outgoing queue of a single webhook in this document.
    this._app.delete('/api/docs/:docId/webhooks/queue/:webhookId', isOwner,
      withDocTriggersLock(async (activeDoc, req, res) => {
        const webhookId = req.params.webhookId;
        await activeDoc.clearSingleWebhookQueue(webhookId);
        await activeDoc.sendWebhookNotification();
        res.json({success: true});
      })
    );

    // Lists all webhooks and their current status in the document.
    // Each entry carries two status fields: one for the webhook itself ('idle',
    // 'sending', 'retrying', 'postponed' or 'error') and one for the last request
    // attempt ('success', 'failure' or 'rejected').
    this._app.get('/api/docs/:docId/webhooks', isOwner,
      withDocTriggersLock(async (activeDoc, req, res) => {
        res.json(await activeDoc.webhooksSummary());
      })
    );

    // Reload a document forcibly (in fact this closes the doc; it will be automatically
    // reopened on use).
    this._app.post('/api/docs/:docId/force-reload', canEdit, throttled(async (req, res) => {
      const activeDoc = await this._getActiveDoc(req);
      await activeDoc.reloadDoc();
      res.json(null);
    }));

    // Enter or exit recovery mode for the document. Only owners may do this.
    this._app.post('/api/docs/:docId/recover', canEdit, throttled(async (req, res) => {
      const recoveryModeRaw = req.body.recoveryMode;
      const recoveryMode = (typeof recoveryModeRaw === 'boolean') ? recoveryModeRaw : undefined;
      if (!await this._isOwner(req)) { throw new Error('Only owners can control recovery mode'); }
      this._docManager.setRecovery(getDocId(req), recoveryMode ?? true);
      const activeDoc = await this._docManager.fetchDoc(docSessionFromRequest(req), getDocId(req), recoveryMode);
      res.json({
        recoveryMode: activeDoc.recoveryMode
      });
    }));

    // DELETE /api/docs/:docId
    // Delete the specified doc.
    this._app.delete('/api/docs/:docId', canEditMaybeRemoved, throttled(async (req, res) => {
      await this._removeDoc(req, res, true);
    }));

    // POST /api/docs/:docId/remove
    // Soft-delete the specified doc. If query parameter "permanent" is set,
    // delete permanently.
    this._app.post('/api/docs/:docId/remove', canEditMaybeRemoved, throttled(async (req, res) => {
      await this._removeDoc(req, res, isParameterOn(req.query.permanent));
    }));

    this._app.get('/api/docs/:docId/snapshots', canView, withDoc(async (activeDoc, req, res) => {
      const docSession = docSessionFromRequest(req);
      const {snapshots} = await activeDoc.getSnapshots(docSession, isAffirmative(req.query.raw));
      res.json({snapshots});
    }));

    this._app.get('/api/docs/:docId/usersForViewAs', isOwner, withDoc(async (activeDoc, req, res) => {
      const docSession = docSessionFromRequest(req);
      res.json(await activeDoc.getUsersForViewAs(docSession));
    }));

    this._app.post('/api/docs/:docId/snapshots/remove', isOwner, withDoc(async (activeDoc, req, res) => {
      const docSession = docSessionFromRequest(req);
      const snapshotIds = req.body.snapshotIds as string[];
      if (snapshotIds) {
        await activeDoc.removeSnapshots(docSession, snapshotIds);
        res.json({snapshotIds});
        return;
      }
      if (req.body.select === 'unlisted') {
        // Remove any snapshots not listed in inventory. Ideally, there should be no
        // snapshots, and this undocumented feature is just for fixing up problems.
        const full = (await activeDoc.getSnapshots(docSession, true)).snapshots.map(s => s.snapshotId);
        const listed = new Set((await activeDoc.getSnapshots(docSession)).snapshots.map(s => s.snapshotId));
        const unlisted = full.filter(snapshotId => !listed.has(snapshotId));
        await activeDoc.removeSnapshots(docSession, unlisted);
        res.json({snapshotIds: unlisted});
        return;
      }
      if (req.body.select === 'past') {
        // Remove all but the latest snapshot. Useful for sanitizing history if something
        // bad snuck into previous snapshots and they are not valuable to preserve.
        const past = (await activeDoc.getSnapshots(docSession, true)).snapshots.map(s => s.snapshotId);
        past.shift();  // remove current version.
        await activeDoc.removeSnapshots(docSession, past);
        res.json({snapshotIds: past});
        return;
      }
      throw new Error('please specify snapshotIds to remove');
    }));
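    // Accepted bodies: { "snapshotIds": [...] } to remove specific snapshots, or
    // { "select": "unlisted" } / { "select": "past" } for the cleanup modes above.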

    this._app.post('/api/docs/:docId/flush', canEdit, throttled(async (req, res) => {
      const activeDocPromise = this._getActiveDocIfAvailable(req);
      if (!activeDocPromise) {
        // Only need to flush if doc is actually open.
        res.json(false);
        return;
      }
      const activeDoc = await activeDocPromise;
      await activeDoc.flushDoc();
      res.json(true);
    }));
|
|
|
|
|
(core) support GRIST_WORKER_GROUP to place worker into an exclusive group
Summary:
In an emergency, we may want to serve certain documents with "old" workers as we fix problems. This diff adds some support for that.
* Creates duplicate task definitions and services for staging and production doc workers (called grist-docs-staging2 and grist-docs-prod2), pulling from distinct docker tags (staging2 and prod2). The services are set to have zero workers until we need them.
* These new workers are started with a new env variable `GRIST_WORKER_GROUP` set to `secondary`.
* The `GRIST_WORKER_GROUP` variable, if set, makes the worker available to documents in the named group, and only that group.
* An unauthenticated `/assign` endpoint is added to documents which, when POSTed to, checks that the doc is served by a worker in the desired group for that doc (as set manually in redis), and if not frees the doc up for reassignment. This makes it possible to move individual docs between workers without redeployments.
The bash scripts added are a record of how the task definitions + services were created. The services could just have been copied manually, but the task definitions will need to be updated whenever the definitions for the main doc workers are updated, so it is worth scripting that.
For example, if a certain document were to fail on a new deployment of Grist, but rolling back the full deployment wasn't practical:
* Set prod2 tag in docker to desired codebase for that document
* Set desired_count for grist-docs-prod2 service to non-zero
* Set doc-<docid>-group for that doc in redis to secondary
* Hit /api/docs/<docid>/assign to move the doc to grist-docs-prod2
(If the document needs to be reverted to a previous snapshot, that currently would need doing manually - could be made simpler, but not in scope of this diff).
Test Plan: added tests
Reviewers: dsagal
Reviewed By: dsagal
Differential Revision: https://phab.getgrist.com/D2649
2020-11-02 19:24:46 +00:00
|
|
|
// Administrative endpoint, that checks if a document is in the expected group,
|
|
|
|
// and frees it for reassignment if not. Has no effect if document is in the
|
|
|
|
// expected group. Does not require specific rights. Returns true if the document
|
|
|
|
// is freed up for reassignment, otherwise false.
|
2022-08-09 15:50:18 +00:00
|
|
|
//
|
|
|
|
// Optionally accepts a `group` query param for updating the document's group prior
|
2022-08-15 19:52:38 +00:00
|
|
|
// to (possible) reassignment. A blank string unsets the current group, if any.
|
|
|
|
// (Requires a special permit.)
|
2021-11-04 16:25:42 +00:00
|
|
|
this._app.post('/api/docs/:docId/assign', canEdit, throttled(async (req, res) => {
|
(core) support GRIST_WORKER_GROUP to place worker into an exclusive group
Summary:
In an emergency, we may want to serve certain documents with "old" workers as we fix problems. This diff adds some support for that.
* Creates duplicate task definitions and services for staging and production doc workers (called grist-docs-staging2 and grist-docs-prod2), pulling from distinct docker tags (staging2 and prod2). The services are set to have zero workers until we need them.
* These new workers are started with a new env variable `GRIST_WORKER_GROUP` set to `secondary`.
* The `GRIST_WORKER_GROUP` variable, if set, makes the worker available to documents in the named group, and only that group.
* An unauthenticated `/assign` endpoint is added to documents which, when POSTed to, checks that the doc is served by a worker in the desired group for that doc (as set manually in redis), and if not frees the doc up for reassignment. This makes it possible to move individual docs between workers without redeployments.
The bash scripts added are a record of how the task definitions + services were created. The services could just have been copied manually, but the task definitions will need to be updated whenever the definitions for the main doc workers are updated, so it is worth scripting that.
For example, if a certain document were to fail on a new deployment of Grist, but rolling back the full deployment wasn't practical:
* Set prod2 tag in docker to desired codebase for that document
* Set desired_count for grist-docs-prod2 service to non-zero
* Set doc-<docid>-group for that doc in redis to secondary
* Hit /api/docs/<docid>/assign to move the doc to grist-docs-prod2
(If the document needs to be reverted to a previous snapshot, that currently would need doing manually - could be made simpler, but not in scope of this diff).
Test Plan: added tests
Reviewers: dsagal
Reviewed By: dsagal
Differential Revision: https://phab.getgrist.com/D2649
2020-11-02 19:24:46 +00:00
|
|
|
const docId = getDocId(req);
|
2023-09-05 18:27:35 +00:00
|
|
|
const group = optStringParam(req.query.group, 'group');
|
2022-08-15 19:52:38 +00:00
|
|
|
if (group !== undefined && req.specialPermit?.action === 'assign-doc') {
|
|
|
|
if (group.trim() === '') {
|
|
|
|
await this._docWorkerMap.removeDocGroup(docId);
|
|
|
|
} else {
|
|
|
|
await this._docWorkerMap.updateDocGroup(docId, group);
|
|
|
|
}
|
2022-08-09 15:50:18 +00:00
|
|
|
}
|
2020-11-02 19:24:46 +00:00
|
|
|
const status = await this._docWorkerMap.getDocWorker(docId);
|
|
|
|
if (!status) { res.json(false); return; }
|
|
|
|
const workerGroup = await this._docWorkerMap.getWorkerGroup(status.docWorker.id);
|
|
|
|
const docGroup = await this._docWorkerMap.getDocGroup(docId);
|
|
|
|
if (docGroup === workerGroup) { res.json(false); return; }
|
|
|
|
const activeDoc = await this._getActiveDoc(req);
|
|
|
|
await activeDoc.flushDoc();
|
|
|
|
// flushDoc terminates once there's no pending operation on the document.
|
2021-10-15 09:31:13 +00:00
|
|
|
// There could still be async operations in progress. We mute their effect,
|
2020-11-02 19:24:46 +00:00
|
|
|
// as if they never happened.
|
|
|
|
activeDoc.docClients.interruptAllClients();
|
|
|
|
activeDoc.setMuted();
|
|
|
|
await activeDoc.shutdown();
|
|
|
|
await this._docWorkerMap.releaseAssignment(status.docWorker.id, docId);
|
|
|
|
res.json(true);
|
|
|
|
}));
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
// This endpoint cannot use withDoc since it is expected behavior for the ActiveDoc it
|
|
|
|
// starts with to become muted.
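// Body sketch, with fields as read in the handler below (all optional):
//   {sourceDocId?: string, snapshotId?: string, resetTutorialMetadata?: boolean}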
|
|
|
|
this._app.post('/api/docs/:docId/replace', canEdit, throttled(async (req, res) => {
|
2023-04-06 15:10:29 +00:00
|
|
|
const docSession = docSessionFromRequest(req);
|
2020-07-21 13:20:51 +00:00
|
|
|
const activeDoc = await this._getActiveDoc(req);
|
|
|
|
const options: DocReplacementOptions = {};
|
|
|
|
if (req.body.sourceDocId) {
|
|
|
|
options.sourceDocId = await this._confirmDocIdForRead(req, String(req.body.sourceDocId));
|
2022-11-09 16:49:23 +00:00
|
|
|
// Make sure that if we wanted to download the full source, we would be allowed.
|
|
|
|
const result = await fetch(this._grist.getHomeUrl(req, `/api/docs/${options.sourceDocId}/download?dryrun=1`), {
|
|
|
|
method: 'GET',
|
|
|
|
headers: {
|
|
|
|
...getTransitiveHeaders(req),
|
|
|
|
'Content-Type': 'application/json',
|
|
|
|
}
|
|
|
|
});
|
|
|
|
if (result.status !== 200) {
|
|
|
|
const jsonResult = await result.json();
|
|
|
|
throw new ApiError(jsonResult.error, result.status);
|
|
|
|
}
|
2020-07-21 13:20:51 +00:00
|
|
|
// We should make sure the source document has flushed recently.
|
|
|
|
// It may not be served by the same worker, so work through the api.
|
|
|
|
await fetch(this._grist.getHomeUrl(req, `/api/docs/${options.sourceDocId}/flush`), {
|
|
|
|
method: 'POST',
|
|
|
|
headers: {
|
|
|
|
...getTransitiveHeaders(req),
|
|
|
|
'Content-Type': 'application/json',
|
|
|
|
}
|
|
|
|
});
|
2023-03-22 13:48:50 +00:00
|
|
|
if (req.body.resetTutorialMetadata) {
|
|
|
|
const scope = getDocScope(req);
|
|
|
|
const tutorialTrunkId = options.sourceDocId;
|
|
|
|
await this._dbManager.connection.transaction(async (manager) => {
|
|
|
|
// Fetch the tutorial trunk doc so we can replace the tutorial doc's name.
|
2023-11-01 13:54:19 +00:00
|
|
|
const tutorialTrunk = await this._dbManager.getDoc({...scope, urlId: tutorialTrunkId}, manager);
|
2023-03-22 13:48:50 +00:00
|
|
|
await this._dbManager.updateDocument(
|
|
|
|
scope,
|
|
|
|
{
|
|
|
|
name: tutorialTrunk.name,
|
|
|
|
options: {
|
|
|
|
tutorial: {
|
2023-06-09 16:32:40 +00:00
|
|
|
...tutorialTrunk.options?.tutorial,
|
2023-03-22 13:48:50 +00:00
|
|
|
// For now, the only state we need to reset is the slide position.
|
|
|
|
lastSlideIndex: 0,
|
|
|
|
},
|
|
|
|
},
|
|
|
|
},
|
|
|
|
manager
|
|
|
|
);
|
|
|
|
});
|
2023-04-06 15:10:29 +00:00
|
|
|
const {forkId} = parseUrlId(scope.urlId);
|
|
|
|
activeDoc.logTelemetryEvent(docSession, 'tutorialRestarted', {
|
2023-06-06 17:08:50 +00:00
|
|
|
full: {
|
2023-07-04 21:21:34 +00:00
|
|
|
tutorialForkIdDigest: forkId,
|
|
|
|
tutorialTrunkIdDigest: tutorialTrunkId,
|
2023-06-06 17:08:50 +00:00
|
|
|
},
|
2023-04-06 15:10:29 +00:00
|
|
|
});
|
2023-03-22 13:48:50 +00:00
|
|
|
}
|
2020-07-21 13:20:51 +00:00
|
|
|
}
|
|
|
|
if (req.body.snapshotId) {
|
|
|
|
options.snapshotId = String(req.body.snapshotId);
|
|
|
|
}
|
2022-11-09 16:49:23 +00:00
|
|
|
await activeDoc.replace(docSession, options);
|
2020-07-21 13:20:51 +00:00
|
|
|
res.json(null);
|
|
|
|
}));
|
|
|
|
|
|
|
|
this._app.get('/api/docs/:docId/states', canView, withDoc(async (activeDoc, req, res) => {
|
2020-09-11 20:27:09 +00:00
|
|
|
const docSession = docSessionFromRequest(req);
|
|
|
|
res.json(await this._getStates(docSession, activeDoc));
|
2020-07-21 13:20:51 +00:00
|
|
|
}));
|
|
|
|
|
2020-12-18 17:37:16 +00:00
|
|
|
this._app.post('/api/docs/:docId/states/remove', isOwner, withDoc(async (activeDoc, req, res) => {
|
|
|
|
const docSession = docSessionFromRequest(req);
|
2021-11-29 20:12:45 +00:00
|
|
|
const keep = integerParam(req.body.keep, 'keep');
|
2020-12-18 17:37:16 +00:00
|
|
|
res.json(await activeDoc.deleteActions(docSession, keep));
|
|
|
|
}));
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
this._app.get('/api/docs/:docId/compare/:docId2', canView, withDoc(async (activeDoc, req, res) => {
|
2020-09-18 18:43:01 +00:00
|
|
|
const showDetails = isAffirmative(req.query.detail);
|
2020-09-11 20:27:09 +00:00
|
|
|
const docSession = docSessionFromRequest(req);
|
|
|
|
const {states} = await this._getStates(docSession, activeDoc);
|
2020-07-21 13:20:51 +00:00
|
|
|
const ref = await fetch(this._grist.getHomeUrl(req, `/api/docs/${req.params.docId2}/states`), {
|
|
|
|
headers: {
|
|
|
|
...getTransitiveHeaders(req),
|
|
|
|
'Content-Type': 'application/json',
|
|
|
|
}
|
|
|
|
});
|
|
|
|
const states2: DocState[] = (await ref.json()).states;
|
|
|
|
const left = states[0];
|
|
|
|
const right = states2[0];
|
|
|
|
if (!left || !right) {
|
|
|
|
// This should not arise unless there's a bug.
|
|
|
|
throw new Error('document with no history');
|
|
|
|
}
|
|
|
|
const rightHashes = new Set(states2.map(state => state.h));
|
|
|
|
const parent = states.find(state => rightHashes.has(state.h)) || null;
|
|
|
|
const leftChanged = parent && parent.h !== left.h;
|
|
|
|
const rightChanged = parent && parent.h !== right.h;
|
|
|
|
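// Summarize relative to the common parent: 'same' if neither side advanced
// past it, 'left'/'right' if only one did, 'both' if they diverged, and
// 'unrelated' if no common parent was found.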
const summary = leftChanged ? (rightChanged ? 'both' : 'left') :
|
|
|
|
(rightChanged ? 'right' : (parent ? 'same' : 'unrelated'));
|
|
|
|
const comparison: DocStateComparison = {
|
|
|
|
left, right, parent, summary
|
|
|
|
};
|
2020-09-18 18:43:01 +00:00
|
|
|
if (showDetails && parent) {
|
|
|
|
// Calculate changes from the parent to the current version of this document.
|
|
|
|
const leftChanges = (await this._getChanges(docSession, activeDoc, states, parent.h,
|
|
|
|
'HEAD')).details!.rightChanges;
|
|
|
|
|
|
|
|
// Calculate changes from the (common) parent to the current version of the other document.
|
|
|
|
const url = `/api/docs/${req.params.docId2}/compare?left=${parent.h}`;
|
|
|
|
const rightChangesReq = await fetch(this._grist.getHomeUrl(req, url), {
|
|
|
|
headers: {
|
|
|
|
...getTransitiveHeaders(req),
|
|
|
|
'Content-Type': 'application/json',
|
|
|
|
}
|
|
|
|
});
|
|
|
|
const rightChanges = (await rightChangesReq.json()).details!.rightChanges;
|
|
|
|
|
|
|
|
// Add the left and right changes as details to the result.
|
|
|
|
comparison.details = { leftChanges, rightChanges };
|
|
|
|
}
|
2020-07-21 13:20:51 +00:00
|
|
|
res.json(comparison);
|
|
|
|
}));
|
|
|
|
|
2020-09-18 18:43:01 +00:00
|
|
|
// Give details about what changed between two versions of a document.
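// Sketch of the expected shapes (drawn from _getChanges below; `left` and
// `right` default to HEAD, and left must be an ancestor of right):
//
//   GET /api/docs/:docId/compare?left=<hash>&right=<hash>
//   -> {left, right, parent, summary: 'same'|'right',
//       details: {leftChanges, rightChanges}}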
|
|
|
|
this._app.get('/api/docs/:docId/compare', canView, withDoc(async (activeDoc, req, res) => {
|
|
|
|
// This could be a relatively slow operation if actions are large.
|
2021-11-29 20:12:45 +00:00
|
|
|
const left = stringParam(req.query.left || 'HEAD', 'left');
|
|
|
|
const right = stringParam(req.query.right || 'HEAD', 'right');
|
2020-09-18 18:43:01 +00:00
|
|
|
const docSession = docSessionFromRequest(req);
|
|
|
|
const {states} = await this._getStates(docSession, activeDoc);
|
|
|
|
res.json(await this._getChanges(docSession, activeDoc, states, left, right));
|
|
|
|
}));
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
// Do an import targeted at a specific workspace. Although the URL fits ApiServer, this
|
|
|
|
// endpoint is handled only by DocWorker, so is handled here. (Note: this does not handle
|
|
|
|
// actual file uploads, so no worries here about large request bodies.)
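// Body sketch, with fields as read below:
//   {uploadId: number, browserSettings?: BrowserSettings}
// The JSON response is the import result, which includes the new doc's `id`.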
|
|
|
|
this._app.post('/api/workspaces/:wid/import', expressWrap(async (req, res) => {
|
2023-09-13 04:33:32 +00:00
|
|
|
const mreq = req as RequestWithLogin;
|
2020-07-21 13:20:51 +00:00
|
|
|
const userId = getUserId(req);
|
2021-11-29 20:12:45 +00:00
|
|
|
const wsId = integerParam(req.params.wid, 'wid');
|
|
|
|
const uploadId = integerParam(req.body.uploadId, 'uploadId');
|
2023-11-01 13:54:19 +00:00
|
|
|
const result = await this._docManager.importDocToWorkspace(mreq, {
|
2023-09-05 18:27:35 +00:00
|
|
|
userId,
|
|
|
|
uploadId,
|
|
|
|
workspaceId: wsId,
|
|
|
|
browserSettings: req.body.browserSettings,
|
2023-09-13 04:33:32 +00:00
|
|
|
telemetryMetadata: {
|
|
|
|
limited: {
|
|
|
|
isImport: true,
|
|
|
|
sourceDocIdDigest: undefined,
|
|
|
|
},
|
|
|
|
full: {
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
|
|
|
},
|
2023-09-05 18:27:35 +00:00
|
|
|
});
|
2023-11-15 20:20:51 +00:00
|
|
|
this._logCreatedFileImportDocTelemetryEvent(req, {
|
|
|
|
full: {
|
|
|
|
docIdDigest: result.id,
|
|
|
|
},
|
|
|
|
});
|
2020-07-21 13:20:51 +00:00
|
|
|
res.json(result);
|
|
|
|
}));
|
|
|
|
|
2023-03-16 21:37:24 +00:00
|
|
|
this._app.get('/api/docs/:docId/download/table-schema', canView, withDoc(async (activeDoc, req, res) => {
|
|
|
|
const doc = await this._dbManager.getDoc(req);
|
2024-03-06 17:12:42 +00:00
|
|
|
const options = await this._getDownloadOptions(req, doc);
|
2023-03-16 21:37:24 +00:00
|
|
|
const tableSchema = await collectTableSchemaInFrictionlessFormat(activeDoc, req, options);
|
|
|
|
const apiPath = await this._grist.getResourceUrl(doc, 'api');
|
|
|
|
const query = new URLSearchParams(req.query as {[key: string]: string});
|
|
|
|
const tableSchemaPath = `${apiPath}/download/csv?${query.toString()}`;
|
|
|
|
res.send({
|
|
|
|
format: "csv",
|
|
|
|
mediatype: "text/csv",
|
|
|
|
encoding: "utf-8",
|
|
|
|
path: tableSchemaPath,
|
|
|
|
dialect: {
|
|
|
|
delimiter: ",",
|
|
|
|
doubleQuote: true,
|
|
|
|
},
|
|
|
|
...tableSchema,
|
|
|
|
});
|
|
|
|
}));
|
|
|
|
|
2021-09-01 21:07:53 +00:00
|
|
|
this._app.get('/api/docs/:docId/download/csv', canView, withDoc(async (activeDoc, req, res) => {
|
2024-03-06 17:12:42 +00:00
|
|
|
const options = await this._getDownloadOptions(req);
|
2021-09-01 21:07:53 +00:00
|
|
|
|
2024-03-20 13:58:24 +00:00
|
|
|
await downloadDSV(activeDoc, req, res, {...options, delimiter: ','});
|
|
|
|
}));
|
|
|
|
|
|
|
|
this._app.get('/api/docs/:docId/download/tsv', canView, withDoc(async (activeDoc, req, res) => {
|
|
|
|
const options = await this._getDownloadOptions(req);
|
|
|
|
|
|
|
|
await downloadDSV(activeDoc, req, res, {...options, delimiter: '\t'});
|
|
|
|
}));
|
|
|
|
|
|
|
|
this._app.get('/api/docs/:docId/download/dsv', canView, withDoc(async (activeDoc, req, res) => {
|
|
|
|
const options = await this._getDownloadOptions(req);
|
|
|
|
|
|
|
|
await downloadDSV(activeDoc, req, res, {...options, delimiter: '💩'});
|
2021-09-01 21:07:53 +00:00
|
|
|
}));
|
|
|
|
|
|
|
|
this._app.get('/api/docs/:docId/download/xlsx', canView, withDoc(async (activeDoc, req, res) => {
|
2024-03-06 17:12:42 +00:00
|
|
|
const options: DownloadOptions = (!_.isEmpty(req.query) && !_.isEqual(Object.keys(req.query), ["title"]))
|
|
|
|
? await this._getDownloadOptions(req)
|
|
|
|
: {
|
|
|
|
filename: await this._getDownloadFilename(req),
|
2022-09-14 18:55:44 +00:00
|
|
|
tableId: '',
|
|
|
|
viewSectionId: undefined,
|
|
|
|
filters: [],
|
|
|
|
sortOrder: [],
|
2023-10-16 00:17:43 +00:00
|
|
|
header: 'label'
|
2022-09-14 18:55:44 +00:00
|
|
|
};
|
2021-09-01 21:07:53 +00:00
|
|
|
await downloadXLSX(activeDoc, req, res, options);
|
|
|
|
}));
|
2021-08-30 20:06:40 +00:00
|
|
|
|
2021-07-21 08:46:03 +00:00
|
|
|
this._app.get('/api/docs/:docId/send-to-drive', canView, decodeGoogleToken, withDoc(exportToDrive));
|
|
|
|
|
2023-07-05 15:36:45 +00:00
|
|
|
/**
|
|
|
|
* Send a request to the formula assistant to get completions for a formula. Increases the
|
|
|
|
* usage of the formula assistant for the billing account in case of success.
|
|
|
|
*/
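// Response sketch (as assembled below): the completion result, with
//   limit?: {usage: number, limit: number}
// added when a usage limit applies to the billing account.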
|
|
|
|
this._app.post('/api/docs/:docId/assistant', canView, checkLimit('assistant'),
|
|
|
|
withDoc(async (activeDoc, req, res) => {
|
|
|
|
const docSession = docSessionFromRequest(req);
|
|
|
|
const request = req.body;
|
|
|
|
const result = await sendForCompletion(docSession, activeDoc, request);
|
2023-08-30 15:58:18 +00:00
|
|
|
const limit = await this._increaseLimit('assistant', req);
|
|
|
|
res.json({
|
|
|
|
...result,
|
|
|
|
limit: !limit ? undefined : {
|
|
|
|
usage: limit.usage,
|
|
|
|
limit: limit.limit,
|
|
|
|
},
|
|
|
|
});
|
2023-07-05 15:36:45 +00:00
|
|
|
})
|
|
|
|
);
|
|
|
|
|
2023-09-05 18:27:35 +00:00
|
|
|
/**
|
|
|
|
* Create a document.
|
|
|
|
*
|
|
|
|
* When an upload is included, it is imported as the initial state of the document.
|
2023-09-06 18:35:46 +00:00
|
|
|
*
|
|
|
|
* When a source document id is included, its structure and (optionally) data is
|
|
|
|
* included in the new document.
|
|
|
|
*
|
|
|
|
* In all other cases, the document is left empty.
|
2023-09-05 18:27:35 +00:00
|
|
|
*
|
|
|
|
* If a workspace id is included, the document will be saved there instead of
|
|
|
|
* being left "unsaved".
|
|
|
|
*
|
|
|
|
* Returns the id of the created document.
|
|
|
|
*
|
|
|
|
* TODO: unify this with the other document creation and import endpoints.
|
|
|
|
*/
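// A hedged example of a JSON request body (field names as read below; the
// values are illustrative only):
//
//   POST /api/docs
//   {"sourceDocumentId": "abc123", "workspaceId": 42,
//    "documentName": "Copy of Plans", "asTemplate": false}
//   -> "<id of the created document>"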
|
2023-09-08 13:05:52 +00:00
|
|
|
this._app.post('/api/docs', checkAnonymousCreation, expressWrap(async (req, res) => {
|
2023-09-13 04:33:32 +00:00
|
|
|
const mreq = req as RequestWithLogin;
|
2020-07-21 13:20:51 +00:00
|
|
|
const userId = getUserId(req);
|
2023-09-05 18:27:35 +00:00
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
let uploadId: number|undefined;
|
|
|
|
let parameters: {[key: string]: any};
|
|
|
|
if (req.is('multipart/form-data')) {
|
|
|
|
const formResult = await handleOptionalUpload(req, res);
|
|
|
|
if (formResult.upload) {
|
|
|
|
uploadId = formResult.upload.uploadId;
|
|
|
|
}
|
|
|
|
parameters = formResult.parameters || {};
|
|
|
|
} else {
|
|
|
|
parameters = req.body;
|
|
|
|
}
|
2023-09-05 18:27:35 +00:00
|
|
|
|
2023-09-06 18:35:46 +00:00
|
|
|
const sourceDocumentId = optStringParam(parameters.sourceDocumentId, 'sourceDocumentId');
|
2023-09-05 18:27:35 +00:00
|
|
|
const workspaceId = optIntegerParam(parameters.workspaceId, 'workspaceId');
|
2020-07-21 13:20:51 +00:00
|
|
|
const browserSettings: BrowserSettings = {};
|
|
|
|
if (parameters.timezone) { browserSettings.timezone = parameters.timezone; }
|
2021-08-26 16:35:11 +00:00
|
|
|
browserSettings.locale = localeFromRequest(req);
|
2023-09-05 18:27:35 +00:00
|
|
|
|
|
|
|
let docId: string;
|
2023-09-06 18:35:46 +00:00
|
|
|
if (sourceDocumentId !== undefined) {
|
|
|
|
docId = await this._copyDocToWorkspace(req, {
|
|
|
|
userId,
|
|
|
|
sourceDocumentId,
|
|
|
|
workspaceId: integerParam(parameters.workspaceId, 'workspaceId'),
|
|
|
|
documentName: stringParam(parameters.documentName, 'documentName'),
|
|
|
|
asTemplate: optBooleanParam(parameters.asTemplate, 'asTemplate'),
|
|
|
|
});
|
|
|
|
} else if (uploadId !== undefined) {
|
2023-11-01 13:54:19 +00:00
|
|
|
const result = await this._docManager.importDocToWorkspace(mreq, {
|
2023-09-05 18:27:35 +00:00
|
|
|
userId,
|
|
|
|
uploadId,
|
2023-09-06 18:35:46 +00:00
|
|
|
documentName: optStringParam(parameters.documentName, 'documentName'),
|
2023-09-05 18:27:35 +00:00
|
|
|
workspaceId,
|
|
|
|
browserSettings,
|
2023-09-13 04:33:32 +00:00
|
|
|
telemetryMetadata: {
|
|
|
|
limited: {
|
|
|
|
isImport: true,
|
|
|
|
sourceDocIdDigest: undefined,
|
|
|
|
},
|
|
|
|
full: {
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
|
|
|
},
|
2023-09-05 18:27:35 +00:00
|
|
|
});
|
|
|
|
docId = result.id;
|
2023-11-15 20:20:51 +00:00
|
|
|
this._logCreatedFileImportDocTelemetryEvent(req, {
|
|
|
|
full: {
|
|
|
|
docIdDigest: docId,
|
|
|
|
},
|
|
|
|
});
|
2023-09-05 18:27:35 +00:00
|
|
|
} else if (workspaceId !== undefined) {
|
2023-09-06 18:35:46 +00:00
|
|
|
docId = await this._createNewSavedDoc(req, {
|
|
|
|
workspaceId: workspaceId,
|
|
|
|
documentName: optStringParam(parameters.documentName, 'documentName'),
|
2023-09-05 18:27:35 +00:00
|
|
|
});
|
|
|
|
} else {
|
2023-09-06 18:35:46 +00:00
|
|
|
docId = await this._createNewUnsavedDoc(req, {
|
2023-09-05 18:27:35 +00:00
|
|
|
userId,
|
2023-09-06 18:35:46 +00:00
|
|
|
browserSettings,
|
2023-09-05 18:27:35 +00:00
|
|
|
});
|
2020-07-21 13:20:51 +00:00
|
|
|
}
|
2023-09-05 18:27:35 +00:00
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
return res.status(200).json(docId);
|
|
|
|
}));
|
2023-12-12 09:58:20 +00:00
|
|
|
|
2024-01-12 17:35:24 +00:00
|
|
|
/**
|
2024-02-21 19:22:01 +00:00
|
|
|
* Get the specified view section's form data.
|
2024-01-12 17:35:24 +00:00
|
|
|
*/
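// Response shape, as assembled at the end of this handler:
//
//   GET /api/docs/:docId/forms/:vsId
//   -> {formFieldsById, formLayoutSpec, formTableId, formTitle}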
|
2024-02-21 19:22:01 +00:00
|
|
|
this._app.get('/api/docs/:docId/forms/:vsId', canView,
|
2023-12-12 09:58:20 +00:00
|
|
|
withDoc(async (activeDoc, req, res) => {
|
2024-02-21 19:22:01 +00:00
|
|
|
if (!activeDoc.docData) {
|
|
|
|
throw new ApiError('DocData not available', 500);
|
|
|
|
}
|
|
|
|
|
|
|
|
const sectionId = integerParam(req.params.vsId, 'vsId');
|
2024-01-12 17:35:24 +00:00
|
|
|
const docSession = docSessionFromRequest(req);
|
|
|
|
const linkId = getDocSessionShare(docSession);
|
|
|
|
if (linkId) {
|
|
|
|
/* If accessed via a share, the share's `linkId` will be present and
|
|
|
|
* we'll need to check that the form is in fact published, and that the
|
|
|
|
* share key is associated with the form, before granting access to the
|
|
|
|
* form. */
|
2024-02-21 19:22:01 +00:00
|
|
|
this._assertIsPublishedForm({
|
2024-01-12 17:35:24 +00:00
|
|
|
docData: activeDoc.docData,
|
|
|
|
linkId,
|
|
|
|
sectionId,
|
|
|
|
});
|
|
|
|
}
|
2024-02-21 19:22:01 +00:00
|
|
|
|
|
|
|
const Views_section = activeDoc.docData.getMetaTable('_grist_Views_section');
|
2024-01-18 17:23:50 +00:00
|
|
|
const section = Views_section.getRecord(sectionId);
|
2024-01-12 17:35:24 +00:00
|
|
|
if (!section) {
|
2024-02-21 19:22:01 +00:00
|
|
|
throw new ApiError('Form not found', 404, {code: 'FormNotFound'});
|
2023-12-12 09:58:20 +00:00
|
|
|
}
|
2024-02-21 19:22:01 +00:00
|
|
|
|
|
|
|
const Views_section_field = activeDoc.docData.getMetaTable('_grist_Views_section_field');
|
|
|
|
const Tables_column = activeDoc.docData.getMetaTable('_grist_Tables_column');
|
|
|
|
const fields = Views_section_field
|
|
|
|
.filterRecords({parentId: sectionId})
|
|
|
|
.filter(f => {
|
2024-01-18 17:23:50 +00:00
|
|
|
const col = Tables_column.getRecord(f.colRef);
|
2024-02-21 19:22:01 +00:00
|
|
|
// Formulas and attachments are currently unsupported.
|
2024-03-20 14:51:59 +00:00
|
|
|
return col && !(col.isFormula && col.formula) && col.type !== 'Attachments';
|
2023-12-12 09:58:20 +00:00
|
|
|
});
|
2024-02-21 19:22:01 +00:00
|
|
|
|
|
|
|
let {layoutSpec: formLayoutSpec} = section;
|
|
|
|
if (!formLayoutSpec) {
|
|
|
|
formLayoutSpec = JSON.stringify({
|
2023-12-12 09:58:20 +00:00
|
|
|
type: 'Layout',
|
2024-01-18 17:23:50 +00:00
|
|
|
children: [
|
|
|
|
{type: 'Label'},
|
|
|
|
{type: 'Label'},
|
|
|
|
{
|
|
|
|
type: 'Section',
|
|
|
|
children: [
|
|
|
|
{type: 'Label'},
|
|
|
|
{type: 'Label'},
|
2024-02-21 19:22:01 +00:00
|
|
|
...fields.slice(0, INITIAL_FIELDS_COUNT).map(f => ({
|
|
|
|
type: 'Field',
|
|
|
|
leaf: f.id,
|
|
|
|
})),
|
|
|
|
],
|
|
|
|
},
|
2024-01-18 17:23:50 +00:00
|
|
|
],
|
2024-02-21 19:22:01 +00:00
|
|
|
});
|
2023-12-12 09:58:20 +00:00
|
|
|
}
|
|
|
|
|
2024-02-21 19:22:01 +00:00
|
|
|
// Cache the table reads based on tableId. We are caching only the promise, not the result.
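// (Caching the promise means concurrent lookups for the same table all await
// a single read, instead of each field triggering its own.)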
|
2024-04-11 06:50:30 +00:00
|
|
|
const table = _.memoize((tableId: string) =>
|
|
|
|
readTable(req, activeDoc, tableId, {}, {}).then(r => asRecords(r, {includeId: true})));
|
2024-01-18 17:23:50 +00:00
|
|
|
|
2024-02-21 19:22:01 +00:00
|
|
|
const getTableValues = async (tableId: string, colId: string) => {
|
|
|
|
const records = await table(tableId);
|
|
|
|
return records.map(r => [r.id as number, r.fields[colId]] as const);
|
2024-01-18 17:23:50 +00:00
|
|
|
};
|
|
|
|
|
2024-02-21 19:22:01 +00:00
|
|
|
const Tables = activeDoc.docData.getMetaTable('_grist_Tables');
|
2024-01-18 17:23:50 +00:00
|
|
|
|
2024-02-21 19:22:01 +00:00
|
|
|
const getRefTableValues = async (col: MetaRowRecord<'_grist_Tables_column'>) => {
|
2024-04-11 06:50:30 +00:00
|
|
|
const refTableId = getReferencedTableId(col.type);
|
|
|
|
let refColId: string;
|
|
|
|
if (col.visibleCol) {
|
|
|
|
const refCol = Tables_column.getRecord(col.visibleCol);
|
|
|
|
if (!refCol) { return []; }
|
2024-02-21 19:22:01 +00:00
|
|
|
|
2024-04-11 06:50:30 +00:00
|
|
|
refColId = refCol.colId as string;
|
|
|
|
} else {
|
|
|
|
refColId = 'id';
|
|
|
|
}
|
|
|
|
if (!refTableId || typeof refTableId !== 'string' || !refColId) { return []; }
|
2024-02-21 19:22:01 +00:00
|
|
|
|
2024-03-20 14:51:59 +00:00
|
|
|
const values = await getTableValues(refTableId, refColId);
|
|
|
|
return values.filter(([_id, value]) => !isBlankValue(value));
|
2023-12-12 09:58:20 +00:00
|
|
|
};
|
|
|
|
|
2024-02-21 19:22:01 +00:00
|
|
|
const formFields = await Promise.all(fields.map(async (field) => {
|
|
|
|
const col = Tables_column.getRecord(field.colRef);
|
|
|
|
if (!col) { throw new Error(`Column ${field.colRef} not found`); }
|
|
|
|
|
|
|
|
const fieldOptions = safeJsonParse(field.widgetOptions as string, {});
|
|
|
|
const colOptions = safeJsonParse(col.widgetOptions as string, {});
|
|
|
|
const options = {...colOptions, ...fieldOptions};
|
|
|
|
const type = extractTypeFromColType(col.type as string);
|
|
|
|
const colId = col.colId as string;
|
|
|
|
|
|
|
|
return [field.id, {
|
|
|
|
colId,
|
|
|
|
description: fieldOptions.description || col.description,
|
|
|
|
question: options.question || col.label || colId,
|
|
|
|
options,
|
|
|
|
type,
|
|
|
|
refValues: isFullReferencingType(col.type) ? await getRefTableValues(col) : null,
|
|
|
|
}] as const;
|
|
|
|
}));
|
|
|
|
const formFieldsById = Object.fromEntries(formFields);
|
|
|
|
|
|
|
|
const getTableName = () => {
|
|
|
|
const rawSectionRef = Tables.getRecord(section.tableRef)?.rawViewSectionRef;
|
|
|
|
if (!rawSectionRef) { return null; }
|
|
|
|
|
|
|
|
const rawSection = activeDoc.docData!
|
|
|
|
.getMetaTable('_grist_Views_section')
|
|
|
|
.getRecord(rawSectionRef);
|
|
|
|
return rawSection?.title ?? null;
|
|
|
|
};
|
2023-12-12 09:58:20 +00:00
|
|
|
|
2024-02-21 19:22:01 +00:00
|
|
|
const formTableId = await getRealTableId(String(section.tableRef), {activeDoc, req});
|
|
|
|
const formTitle = section.title || getTableName() || formTableId;
|
2024-01-18 17:23:50 +00:00
|
|
|
|
2024-02-13 17:49:00 +00:00
|
|
|
this._grist.getTelemetry().logEvent(req, 'visitedForm', {
|
|
|
|
full: {
|
|
|
|
docIdDigest: activeDoc.docName,
|
|
|
|
userId: req.userId,
|
|
|
|
altSessionId: req.altSessionId,
|
|
|
|
},
|
|
|
|
});
|
2024-02-21 19:22:01 +00:00
|
|
|
|
|
|
|
res.status(200).json({
|
|
|
|
formFieldsById,
|
|
|
|
formLayoutSpec,
|
|
|
|
formTableId,
|
|
|
|
formTitle,
|
|
|
|
});
|
2023-12-12 09:58:20 +00:00
|
|
|
})
|
|
|
|
);
|
2024-04-18 12:13:16 +00:00
|
|
|
|
|
|
|
// GET /api/docs/:docId/timing
|
|
|
|
// Checks if timing is on for the document.
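// Possible responses (mirroring the branches below):
//   {status: 'disabled'}              // timing is off
//   {status: 'pending'}               // timing is on, results not ready yet
//   {status: 'active', timing: ...}   // timing is on, with collected timings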
|
|
|
|
this._app.get('/api/docs/:docId/timing', isOwner, withDoc(async (activeDoc, req, res) => {
|
|
|
|
if (!activeDoc.isTimingOn) {
|
|
|
|
res.json({status: 'disabled'});
|
|
|
|
} else {
|
|
|
|
const timing = await activeDoc.getTimings();
|
|
|
|
const status = timing ? 'active' : 'pending';
|
|
|
|
res.json({status, timing});
|
|
|
|
}
|
|
|
|
}));
|
|
|
|
|
|
|
|
// POST /api/docs/:docId/timing/start
|
|
|
|
// Start a timing for the document.
|
|
|
|
this._app.post('/api/docs/:docId/timing/start', isOwner, withDoc(async (activeDoc, req, res) => {
|
|
|
|
if (activeDoc.isTimingOn) {
|
|
|
|
res.status(400).json({error: `Timing already started for ${activeDoc.docName}`});
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
// isTimingOn flag is switched synchronously.
|
|
|
|
await activeDoc.startTiming();
|
|
|
|
res.sendStatus(200);
|
|
|
|
}));
|
|
|
|
|
|
|
|
// POST /api/docs/:docId/timing/stop
|
|
|
|
// Stop a timing for the document.
|
|
|
|
this._app.post('/api/docs/:docId/timing/stop', isOwner, withDoc(async (activeDoc, req, res) => {
|
|
|
|
if (!activeDoc.isTimingOn) {
|
|
|
|
res.status(400).json({error: `Timing not started for ${activeDoc.docName}`});
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
res.json(await activeDoc.stopTiming());
|
|
|
|
}));
|
2020-07-21 13:20:51 +00:00
|
|
|
}
|
2023-09-06 18:35:46 +00:00
|
|
|
|
2024-01-12 17:35:24 +00:00
|
|
|
/**
|
2024-02-21 19:22:01 +00:00
|
|
|
* Throws if the specified section is not a published form.
|
2024-01-12 17:35:24 +00:00
|
|
|
*/
|
2024-02-21 19:22:01 +00:00
|
|
|
private _assertIsPublishedForm(params: {
|
|
|
|
docData: DocData,
|
2024-01-12 17:35:24 +00:00
|
|
|
linkId: string,
|
|
|
|
sectionId: number,
|
|
|
|
}) {
|
|
|
|
const {docData, linkId, sectionId} = params;
|
2024-01-24 09:58:19 +00:00
|
|
|
|
2024-01-12 17:35:24 +00:00
|
|
|
// Check that the request is for a valid section in the document.
|
|
|
|
const sections = docData.getMetaTable('_grist_Views_section');
|
2024-01-18 17:23:50 +00:00
|
|
|
const section = sections.getRecord(sectionId);
|
2024-02-21 19:22:01 +00:00
|
|
|
if (!section) { throw new ApiError('Form not found', 404, {code: 'FormNotFound'}); }
|
2024-01-12 17:35:24 +00:00
|
|
|
|
|
|
|
// Check that the section is for a form.
|
|
|
|
const sectionShareOptions = safeJsonParse(section.shareOptions, {});
|
2024-02-21 19:22:01 +00:00
|
|
|
if (!sectionShareOptions.form) { throw new ApiError('Form not found', 404, {code: 'FormNotFound'}); }
|
2024-01-12 17:35:24 +00:00
|
|
|
|
|
|
|
// Check that the form is associated with a share.
|
|
|
|
const viewId = section.parentId;
|
|
|
|
const pages = docData.getMetaTable('_grist_Pages');
|
|
|
|
const page = pages.getRecords().find(p => p.viewRef === viewId);
|
2024-02-21 19:22:01 +00:00
|
|
|
if (!page) { throw new ApiError('Form not found', 404, {code: 'FormNotFound'}); }
|
2024-01-24 09:58:19 +00:00
|
|
|
|
2024-01-12 17:35:24 +00:00
|
|
|
const shares = docData.getMetaTable('_grist_Shares');
|
|
|
|
const share = shares.getRecord(page.shareRef);
|
2024-02-21 19:22:01 +00:00
|
|
|
if (!share) { throw new ApiError('Form not found', 404, {code: 'FormNotFound'}); }
|
2024-01-12 17:35:24 +00:00
|
|
|
|
|
|
|
// Check that the share's link id matches the expected link id.
|
2024-02-21 19:22:01 +00:00
|
|
|
if (share.linkId !== linkId) { throw new ApiError('Form not found', 404, {code: 'FormNotFound'}); }
|
2024-01-12 17:35:24 +00:00
|
|
|
|
|
|
|
// Finally, check that both the section and share are published.
|
|
|
|
if (!sectionShareOptions.publish || !safeJsonParse(share.options, {})?.publish) {
|
2024-02-21 19:22:01 +00:00
|
|
|
throw new ApiError('Form not published', 404, {code: 'FormNotPublished'});
|
2024-01-12 17:35:24 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2023-09-06 18:35:46 +00:00
|
|
|
private async _copyDocToWorkspace(req: Request, options: {
|
|
|
|
userId: number,
|
|
|
|
sourceDocumentId: string,
|
|
|
|
workspaceId: number,
|
|
|
|
documentName: string,
|
|
|
|
asTemplate?: boolean,
|
|
|
|
}): Promise<string> {
|
2023-09-13 04:33:32 +00:00
|
|
|
const mreq = req as RequestWithLogin;
|
2023-09-06 18:35:46 +00:00
|
|
|
const {userId, sourceDocumentId, workspaceId, documentName, asTemplate = false} = options;
|
|
|
|
|
|
|
|
// First, upload a copy of the document.
|
|
|
|
let uploadResult;
|
|
|
|
try {
|
|
|
|
const accessId = makeAccessId(req, getAuthorizedUserId(req));
|
|
|
|
uploadResult = await fetchDoc(this._grist, sourceDocumentId, req, accessId, asTemplate);
|
|
|
|
globalUploadSet.changeUploadName(uploadResult.uploadId, accessId, `${documentName}.grist`);
|
|
|
|
} catch (err) {
|
|
|
|
if ((err as ApiError).status === 403) {
|
|
|
|
throw new ApiError('Insufficient access to document to copy it entirely', 403);
|
|
|
|
}
|
|
|
|
|
|
|
|
throw err;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Then, import the copy to the workspace.
|
2023-11-01 13:54:19 +00:00
|
|
|
const result = await this._docManager.importDocToWorkspace(mreq, {
|
2023-09-06 18:35:46 +00:00
|
|
|
userId,
|
|
|
|
uploadId: uploadResult.uploadId,
|
|
|
|
documentName,
|
|
|
|
workspaceId,
|
2023-09-13 04:33:32 +00:00
|
|
|
telemetryMetadata: {
|
|
|
|
limited: {
|
|
|
|
isImport: false,
|
|
|
|
sourceDocIdDigest: sourceDocumentId,
|
|
|
|
},
|
|
|
|
full: {
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
|
|
|
},
|
2023-09-06 18:35:46 +00:00
|
|
|
});
|
2023-11-15 20:20:51 +00:00
|
|
|
|
|
|
|
const sourceDocument = await this._dbManager.getRawDocById(sourceDocumentId);
|
|
|
|
const isTemplateCopy = sourceDocument.type === 'template';
|
|
|
|
if (isTemplateCopy) {
|
|
|
|
this._grist.getTelemetry().logEvent(mreq, 'copiedTemplate', {
|
|
|
|
full: {
|
|
|
|
templateId: parseUrlId(sourceDocument.urlId || sourceDocument.id).trunkId,
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
|
|
|
});
|
|
|
|
}
|
|
|
|
this._grist.getTelemetry().logEvent(
|
|
|
|
mreq,
|
|
|
|
`createdDoc-${isTemplateCopy ? 'CopyTemplate' : 'CopyDoc'}`,
|
|
|
|
{
|
|
|
|
full: {
|
|
|
|
docIdDigest: result.id,
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
|
|
|
}
|
|
|
|
);
|
|
|
|
|
2023-09-06 18:35:46 +00:00
|
|
|
return result.id;
|
|
|
|
}
|
|
|
|
|
|
|
|
private async _createNewSavedDoc(req: Request, options: {
|
|
|
|
workspaceId: number,
|
|
|
|
documentName?: string,
|
|
|
|
}): Promise<string> {
|
|
|
|
const {documentName, workspaceId} = options;
|
|
|
|
const {status, data, errMessage} = await this._dbManager.addDocument(getScope(req), workspaceId, {
|
|
|
|
name: documentName ?? 'Untitled document',
|
|
|
|
});
|
2023-11-15 20:20:51 +00:00
|
|
|
const docId = data!;
|
2023-09-06 18:35:46 +00:00
|
|
|
if (status !== 200) {
|
|
|
|
throw new ApiError(errMessage || 'unable to create document', status);
|
|
|
|
}
|
2023-09-13 04:33:32 +00:00
|
|
|
this._logDocumentCreatedTelemetryEvent(req, {
|
|
|
|
limited: {
|
2023-11-15 20:20:51 +00:00
|
|
|
docIdDigest: docId,
|
2023-09-13 04:33:32 +00:00
|
|
|
sourceDocIdDigest: undefined,
|
|
|
|
isImport: false,
|
|
|
|
fileType: undefined,
|
|
|
|
isSaved: true,
|
|
|
|
},
|
|
|
|
});
|
2023-11-15 20:20:51 +00:00
|
|
|
this._logCreatedEmptyDocTelemetryEvent(req, {
|
|
|
|
full: {
|
|
|
|
docIdDigest: docId,
|
|
|
|
},
|
|
|
|
});
|
|
|
|
return docId;
|
2023-09-06 18:35:46 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
private async _createNewUnsavedDoc(req: Request, options: {
|
|
|
|
userId: number,
|
|
|
|
browserSettings?: BrowserSettings,
|
|
|
|
}): Promise<string> {
|
|
|
|
const {userId, browserSettings} = options;
|
|
|
|
const isAnonymous = isAnonymousUser(req);
|
|
|
|
const result = makeForkIds({
|
|
|
|
userId,
|
|
|
|
isAnonymous,
|
|
|
|
trunkDocId: NEW_DOCUMENT_CODE,
|
|
|
|
trunkUrlId: NEW_DOCUMENT_CODE,
|
|
|
|
});
|
|
|
|
const docId = result.docId;
|
|
|
|
await this._docManager.createNamedDoc(
|
|
|
|
makeExceptionalDocSession('nascent', {
|
|
|
|
req: req as RequestWithLogin,
|
|
|
|
browserSettings,
|
|
|
|
}),
|
|
|
|
docId
|
|
|
|
);
|
2023-09-13 04:33:32 +00:00
|
|
|
this._logDocumentCreatedTelemetryEvent(req, {
|
|
|
|
limited: {
|
|
|
|
docIdDigest: docId,
|
|
|
|
sourceDocIdDigest: undefined,
|
|
|
|
isImport: false,
|
|
|
|
fileType: undefined,
|
|
|
|
isSaved: false,
|
|
|
|
},
|
|
|
|
});
|
2023-11-15 20:20:51 +00:00
|
|
|
this._logCreatedEmptyDocTelemetryEvent(req, {
|
|
|
|
full: {
|
|
|
|
docIdDigest: docId,
|
|
|
|
},
|
|
|
|
});
|
2023-09-06 18:35:46 +00:00
|
|
|
return docId;
|
|
|
|
}
|
|
|
|
|
2023-09-13 04:33:32 +00:00
|
|
|
private _logDocumentCreatedTelemetryEvent(req: Request, metadata: TelemetryMetadataByLevel) {
|
|
|
|
const mreq = req as RequestWithLogin;
|
2023-11-01 13:54:19 +00:00
|
|
|
this._grist.getTelemetry().logEvent(mreq, 'documentCreated', _.merge({
|
2023-09-13 04:33:32 +00:00
|
|
|
full: {
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
2023-11-01 13:54:19 +00:00
|
|
|
}, metadata));
|
2023-09-13 04:33:32 +00:00
|
|
|
}
|
|
|
|
|
2023-11-15 20:20:51 +00:00
|
|
|
private _logCreatedEmptyDocTelemetryEvent(req: Request, metadata: TelemetryMetadataByLevel) {
|
|
|
|
this._logCreatedDocTelemetryEvent(req, 'createdDoc-Empty', metadata);
|
|
|
|
}
|
|
|
|
|
|
|
|
private _logCreatedFileImportDocTelemetryEvent(req: Request, metadata: TelemetryMetadataByLevel) {
|
|
|
|
this._logCreatedDocTelemetryEvent(req, 'createdDoc-FileImport', metadata);
|
|
|
|
}
|
|
|
|
|
|
|
|
private _logCreatedDocTelemetryEvent(
|
|
|
|
req: Request,
|
|
|
|
event: 'createdDoc-Empty' | 'createdDoc-FileImport',
|
|
|
|
metadata: TelemetryMetadataByLevel,
|
|
|
|
) {
|
|
|
|
const mreq = req as RequestWithLogin;
|
|
|
|
this._grist.getTelemetry().logEvent(mreq, event, _.merge({
|
|
|
|
full: {
|
|
|
|
userId: mreq.userId,
|
|
|
|
altSessionId: mreq.altSessionId,
|
|
|
|
},
|
|
|
|
}, metadata));
|
|
|
|
}
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
/**
|
|
|
|
* Check for read access to the given document, and return its
|
|
|
|
* canonical docId. Throws an error if read access is not available.
|
|
|
|
* This method is used for documents that are not the main document
|
|
|
|
* associated with the request, but are rather an extra source to be
|
|
|
|
* read from, so the access information is not cached in the
|
|
|
|
* request.
|
|
|
|
*/
|
|
|
|
private async _confirmDocIdForRead(req: Request, urlId: string): Promise<string> {
|
2022-03-24 10:59:47 +00:00
|
|
|
const docAuth = await makeDocAuthResult(this._dbManager.getDoc({...getScope(req), urlId}));
|
2020-07-21 13:20:51 +00:00
|
|
|
if (docAuth.error) { throw docAuth.error; }
|
|
|
|
assertAccess('viewers', docAuth);
|
|
|
|
return docAuth.docId!;
|
|
|
|
}
|
|
|
|
|
2024-03-06 17:12:42 +00:00
|
|
|
private async _getDownloadFilename(req: Request, tableId?: string, optDoc?: Document): Promise<string> {
|
|
|
|
let filename = optStringParam(req.query.title, 'title');
|
|
|
|
if (!filename) {
|
|
|
|
// Query DB for doc metadata to get the doc data.
|
|
|
|
const doc = optDoc || await this._dbManager.getDoc(req);
|
|
|
|
const docTitle = doc.name;
|
|
|
|
const suffix = tableId ? (tableId === docTitle ? '' : `-${tableId}`) : '';
|
|
|
|
filename = docTitle + suffix || 'document';
|
|
|
|
}
|
|
|
|
return filename;
|
|
|
|
}
|
|
|
|
|
|
|
|
private async _getDownloadOptions(req: Request, doc?: Document): Promise<DownloadOptions> {
|
2022-09-14 18:55:44 +00:00
|
|
|
const params = parseExportParameters(req);
|
2023-07-14 10:05:22 +00:00
|
|
|
return {
|
2022-09-14 18:55:44 +00:00
|
|
|
...params,
|
2024-03-06 17:12:42 +00:00
|
|
|
filename: await this._getDownloadFilename(req, params.tableId, doc),
|
2022-12-27 18:35:03 +00:00
|
|
|
};
|
2022-09-14 18:55:44 +00:00
|
|
|
}
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
private _getActiveDoc(req: RequestWithLogin): Promise<ActiveDoc> {
|
2020-09-11 20:27:09 +00:00
|
|
|
return this._docManager.fetchDoc(docSessionFromRequest(req), getDocId(req));
|
2020-07-21 13:20:51 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
private _getActiveDocIfAvailable(req: RequestWithLogin): Promise<ActiveDoc>|undefined {
|
|
|
|
return this._docManager.getActiveDoc(getDocId(req));
|
|
|
|
}
|
|
|
|
|
2022-03-21 20:22:35 +00:00
|
|
|
/**
|
|
|
|
* Middleware to track the number of requests outstanding on each document, and to
|
|
|
|
* throw an exception when the maximum number of requests are already outstanding.
|
|
|
|
* Also throws an exception if too many requests (based on the user's product plan)
|
|
|
|
* have been made today for this document.
|
|
|
|
* Access to a document must already have been authorized.
|
|
|
|
*/
|
|
|
|
private _apiThrottle(callback: (req: RequestWithLogin,
|
|
|
|
resp: Response,
|
|
|
|
next: NextFunction) => void | Promise<void>): RequestHandler {
|
|
|
|
return async (req, res, next) => {
|
|
|
|
const docId = getDocId(req);
|
|
|
|
try {
|
2022-04-28 11:51:55 +00:00
|
|
|
const count = this._currentUsage.get(docId) || 0;
|
|
|
|
this._currentUsage.set(docId, count + 1);
|
2022-03-21 20:22:35 +00:00
|
|
|
if (count + 1 > MAX_PARALLEL_REQUESTS_PER_DOC) {
|
|
|
|
throw new ApiError(`Too many backlogged requests for document ${docId} - ` +
|
|
|
|
`try again later?`, 429);
|
|
|
|
}
|
|
|
|
|
2022-04-28 11:51:55 +00:00
|
|
|
if (await this._checkDailyDocApiUsage(req, docId)) {
|
2022-03-21 20:22:35 +00:00
|
|
|
throw new ApiError(`Exceeded daily limit for document ${docId}`, 429);
|
|
|
|
}
|
|
|
|
|
|
|
|
await callback(req as RequestWithLogin, res, next);
|
|
|
|
} catch (err) {
|
|
|
|
next(err);
|
|
|
|
} finally {
|
2022-04-28 11:51:55 +00:00
|
|
|
const count = this._currentUsage.get(docId);
|
2022-03-21 20:22:35 +00:00
|
|
|
if (count) {
|
|
|
|
if (count === 1) {
|
2022-04-28 11:51:55 +00:00
|
|
|
this._currentUsage.delete(docId);
|
2022-03-21 20:22:35 +00:00
|
|
|
} else {
|
2022-04-28 11:51:55 +00:00
|
|
|
this._currentUsage.set(docId, count - 1);
|
2022-03-21 20:22:35 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
};
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Usually returns true if too many requests (based on the user's product plan)
|
2022-04-28 11:51:55 +00:00
|
|
|
* have been made today for this document and the request should be rejected.
|
2022-03-21 20:22:35 +00:00
|
|
|
* Access to a document must already have been authorized.
|
|
|
|
* This is called frequently so it uses caches to check quickly in the common case,
|
|
|
|
* which allows a few ways for users to exceed the limit slightly if the timing works out,
|
|
|
|
* but these should be acceptable.
|
|
|
|
*/
|
2022-04-28 11:51:55 +00:00
|
|
|
private async _checkDailyDocApiUsage(req: Request, docId: string): Promise<boolean> {
|
|
|
|
// Use the cached doc to avoid a database call.
|
|
|
|
// This leaves a small window (currently 5 seconds) for the user to bypass this limit after downgrading,
|
|
|
|
// or to be wrongly rejected after upgrading.
|
|
|
|
const doc = (req as RequestWithLogin).docAuth!.cachedDoc!;
|
2022-03-21 20:22:35 +00:00
|
|
|
|
2024-05-17 19:14:34 +00:00
|
|
|
const max = doc.workspace.org.billingAccount?.getFeatures().baseMaxApiUnitsPerDocumentPerDay;
|
2022-03-21 20:22:35 +00:00
|
|
|
if (!max) {
|
|
|
|
// This doc has no associated product (happens to new unsaved docs)
|
2022-04-28 11:51:55 +00:00
|
|
|
// or the product has no API limit. Allow the request through.
|
|
|
|
return false;
|
2022-03-21 20:22:35 +00:00
|
|
|
}
|
|
|
|
|
2022-04-28 11:51:55 +00:00
|
|
|
// Check the counts in the dailyUsage cache rather than waiting for redis.
|
|
|
|
// The cache will not have counts if this is the first request for this document served by this worker process
|
|
|
|
// or if so many other documents have been served since then that the keys were evicted from the LRU cache.
|
2022-03-21 20:22:35 +00:00
|
|
|
// Both scenarios are temporary and unlikely when usage has been exceeded.
|
2022-04-28 11:51:55 +00:00
|
|
|
// Note that if the limits are exceeded then `keys` below will be undefined,
|
|
|
|
// otherwise it will be an array of three keys corresponding to a day, hour, and minute.
|
|
|
|
const m = moment.utc();
|
|
|
|
const keys = getDocApiUsageKeysToIncr(docId, this._dailyUsage, max, m);
|
|
|
|
if (!keys) {
|
|
|
|
// The limit has been exceeded, reject the request.
|
|
|
|
return true;
|
2022-03-21 20:22:35 +00:00
|
|
|
}
|
|
|
|
|
2022-05-16 17:41:12 +00:00
|
|
|
// If Redis isn't configured, this is as far as we can go with checks.
|
|
|
|
if (!process.env.REDIS_URL) { return false; }
|
|
|
|
|
2022-03-21 20:22:35 +00:00
|
|
|
// Note the increased API usage on redis and in our local cache.
|
2022-04-28 11:51:55 +00:00
|
|
|
// Update redis in the background so that the rest of the request can continue without waiting for redis.
|
2022-07-19 15:39:49 +00:00
|
|
|
const cli = this._docWorkerMap.getRedisClient();
|
|
|
|
if (!cli) { throw new Error('redis unexpectedly not available'); }
|
|
|
|
const multi = cli.multi();
|
2022-04-28 11:51:55 +00:00
|
|
|
for (let i = 0; i < keys.length; i++) {
|
|
|
|
const key = keys[i];
|
|
|
|
// Incrementing the local count immediately prevents many requests from being squeezed through every minute
|
|
|
|
// before counts are received from redis.
|
|
|
|
// But this cache is not 100% reliable and the count from redis may be higher.
|
|
|
|
this._dailyUsage.set(key, (this._dailyUsage.get(key) ?? 0) + 1);
|
|
|
|
const period = docApiUsagePeriods[i];
|
|
|
|
// Expire the key just so that it cleans itself up and saves memory on redis.
|
|
|
|
// Expire after two periods to handle 'next' buckets.
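// For example, assuming day/hour/minute periods (periodsPerDay of 1, 24, and
// 1440), the expiry works out to 2 days, 2 hours, and 2 minutes respectively.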
|
|
|
|
const expiry = 2 * 24 * 60 * 60 / period.periodsPerDay;
|
|
|
|
multi.incr(key).expire(key, expiry);
|
|
|
|
}
|
|
|
|
multi.execAsync().then(result => {
|
|
|
|
for (let i = 0; i < keys.length; i++) {
|
|
|
|
const key = keys[i];
|
|
|
|
const newCount = Number(result![i * 2]); // incrs are at even positions, expires at odd positions
|
2022-03-21 20:22:35 +00:00
|
|
|
// Theoretically this could be overwritten by a lower count that was requested earlier
|
|
|
|
// but somehow arrived after.
|
|
|
|
// This doesn't really matter, and the count on redis will still increase reliably.
|
2022-04-28 11:51:55 +00:00
|
|
|
this._dailyUsage.set(key, newCount);
|
2022-03-21 20:22:35 +00:00
|
|
|
}
|
|
|
|
}).catch(e => console.error(`Error tracking API usage for doc ${docId}`, e));
|
2022-04-28 11:51:55 +00:00
|
|
|
|
|
|
|
// Allow the request through.
|
|
|
|
return false;
|
2022-03-21 20:22:35 +00:00
|
|
|
}
|
|
|
|
|
2023-07-05 15:36:45 +00:00
|
|
|
/**
|
|
|
|
* Creates a middleware that checks the current usage of a limit and rejects the request if it is exceeded.
|
|
|
|
*/
|
|
|
|
private async _checkLimit(limit: LimitType, req: Request, res: Response, next: NextFunction) {
|
|
|
|
await this._dbManager.increaseUsage(getDocScope(req), limit, {dryRun: true, delta: 1});
|
|
|
|
next();
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Increases the current usage of a limit by 1.
|
|
|
|
*/
|
|
|
|
private async _increaseLimit(limit: LimitType, req: Request) {
|
2023-08-30 15:58:18 +00:00
|
|
|
return await this._dbManager.increaseUsage(getDocScope(req), limit, {delta: 1});
|
2023-07-05 15:36:45 +00:00
|
|
|
}
|
|
|
|
|
2023-09-08 13:05:52 +00:00
|
|
|
/**
|
|
|
|
* Disallow document creation for anonymous users if GRIST_ANON_PLAYGROUND is set to false.
|
|
|
|
*/
|
|
|
|
private async _checkAnonymousCreation(req: Request, res: Response, next: NextFunction) {
|
|
|
|
const isAnonPlayground = isAffirmative(process.env.GRIST_ANON_PLAYGROUND ?? true);
|
|
|
|
if (isAnonymousUser(req) && !isAnonPlayground) {
|
|
|
|
throw new ApiError('Anonymous document creation is disabled', 403);
|
|
|
|
}
|
|
|
|
next();
|
|
|
|
}
|
|
|
|
|
2020-12-18 17:37:16 +00:00
|
|
|
private async _assertAccess(role: 'viewers'|'editors'|'owners'|null, allowRemoved: boolean,
|
2020-07-21 13:20:51 +00:00
|
|
|
req: Request, res: Response, next: NextFunction) {
|
|
|
|
const scope = getDocScope(req);
|
|
|
|
allowRemoved = scope.showAll || scope.showRemoved || allowRemoved;
|
2022-07-19 15:39:49 +00:00
|
|
|
const docAuth = await getOrSetDocAuth(req as RequestWithLogin, this._dbManager, this._grist, scope.urlId);
|
2020-11-02 19:24:46 +00:00
|
|
|
if (role) { assertAccess(role, docAuth, {allowRemoved}); }
|
2020-07-21 13:20:51 +00:00
|
|
|
next();
|
|
|
|
}
|
|
|
|
|
2020-09-11 20:27:09 +00:00
|
|
|
/**
|
|
|
|
* Check if user is an owner of the document.
|
2022-12-02 18:51:44 +00:00
|
|
|
* If acceptTrunkForSnapshot is set, being an owner of the trunk of the document (if it is a snapshot)
|
|
|
|
* is sufficient. Uses cachedDoc, which could be stale if access has changed recently.
|
2020-09-11 20:27:09 +00:00
|
|
|
*/
|
2022-12-02 18:51:44 +00:00
|
|
|
private async _isOwner(req: Request, options?: { acceptTrunkForSnapshot?: boolean }) {
|
2020-09-11 20:27:09 +00:00
|
|
|
const scope = getDocScope(req);
|
2022-07-19 15:39:49 +00:00
|
|
|
const docAuth = await getOrSetDocAuth(req as RequestWithLogin, this._dbManager, this._grist, scope.urlId);
|
2022-12-02 18:51:44 +00:00
|
|
|
if (docAuth.access === 'owners') {
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
if (options?.acceptTrunkForSnapshot && docAuth.cachedDoc?.trunkAccess === 'owners') {
|
|
|
|
const parts = parseUrlId(scope.urlId);
|
|
|
|
if (parts.snapshotId) { return true; }
|
|
|
|
}
|
|
|
|
return false;
|
2020-09-11 20:27:09 +00:00
|
|
|
}
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|
// Helper to generate a 503 if the ActiveDoc has been muted.
|
|
|
|
private _checkForMute(activeDoc: ActiveDoc|undefined) {
|
|
|
|
if (activeDoc && activeDoc.muted) {
|
|
|
|
throw new ApiError('Document in flux - try again later', 503);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* Throws an error if, during processing, the ActiveDoc becomes "muted". Also replaces any
|
|
|
|
* other error that may have occurred if the ActiveDoc becomes "muted", since the document
|
|
|
|
* shutting down during processing may have caused a variety of errors.
|
|
|
|
*
|
|
|
|
* Expects to be called within a handler that catches exceptions.
|
|
|
|
*/
|
|
|
|
private _requireActiveDoc(callback: WithDocHandler): RequestHandler {
|
|
|
|
return async (req, res) => {
|
|
|
|
let activeDoc: ActiveDoc|undefined;
|
|
|
|
try {
|
|
|
|
activeDoc = await this._getActiveDoc(req as RequestWithLogin);
|
|
|
|
await callback(activeDoc, req as RequestWithLogin, res);
|
|
|
|
if (!res.headersSent) { this._checkForMute(activeDoc); }
|
|
|
|
} catch (err) {
|
|
|
|
this._checkForMute(activeDoc);
|
|
|
|
throw err;
|
|
|
|
}
|
|
|
|
};
|
|
|
|
}
|
|
|
|
|
2020-09-11 20:27:09 +00:00
|
|
|
private async _getStates(docSession: OptDocSession, activeDoc: ActiveDoc): Promise<DocStates> {
|
|
|
|
const states = await activeDoc.getRecentStates(docSession);
|
2020-07-21 13:20:51 +00:00
|
|
|
return {
|
|
|
|
states,
|
|
|
|
};
|
|
|
|
}
|
|
|
|
|
2020-09-18 18:43:01 +00:00
|
|
|
/**
|
|
|
|
*
|
|
|
|
* Calculate changes between two document versions identified by leftHash and rightHash.
|
|
|
|
* If rightHash is the latest version of the document, the ActionSummary for it will
|
|
|
|
* contain a copy of updated and added rows.
|
|
|
|
*
|
|
|
|
* Currently will fail if leftHash is not an ancestor of rightHash (this restriction could
|
|
|
|
* be lifted, but is adequate for now).
|
|
|
|
*
|
|
|
|
*/
|
|
|
|
private async _getChanges(docSession: OptDocSession, activeDoc: ActiveDoc, states: DocState[],
|
|
|
|
leftHash: string, rightHash: string): Promise<DocStateComparison> {
|
|
|
|
const finder = new HashUtil(states);
|
|
|
|
const leftOffset = finder.hashToOffset(leftHash);
|
|
|
|
const rightOffset = finder.hashToOffset(rightHash);
|
|
|
|
if (rightOffset > leftOffset) {
|
|
|
|
throw new Error('Comparisons currently require left to be an ancestor of right');
|
|
|
|
}
|
|
|
|
const actionNums: number[] = states.slice(rightOffset, leftOffset).map(state => state.n);
|
|
|
|
const actions = (await activeDoc.getActions(actionNums)).reverse();
|
|
|
|
let totalAction = createEmptyActionSummary();
|
|
|
|
for (const action of actions) {
|
|
|
|
if (!action) { continue; }
|
|
|
|
const summary = summarizeAction(action);
|
|
|
|
totalAction = concatenateSummaries([totalAction, summary]);
|
|
|
|
}
|
|
|
|
const result: DocStateComparison = {
|
|
|
|
left: states[leftOffset],
|
|
|
|
right: states[rightOffset],
|
|
|
|
parent: states[leftOffset],
|
|
|
|
summary: (leftOffset === rightOffset) ? 'same' : 'right',
|
|
|
|
details: {
|
|
|
|
leftChanges: {tableRenames: [], tableDeltas: {}},
|
|
|
|
rightChanges: totalAction
|
|
|
|
}
|
|
|
|
};
|
|
|
|
return result;
|
|
|
|
}
|
|
|
|
|
2020-07-21 13:20:51 +00:00
|
|
|

  private async _removeDoc(req: Request, res: Response, permanent: boolean) {
    const mreq = req as RequestWithLogin;
    const scope = getDocScope(req);
    const docId = getDocId(req);
    if (permanent) {
      const {forkId} = parseUrlId(docId);
      if (!forkId) {
        // Soft delete the doc first, to de-list the document.
        await this._dbManager.softDeleteDocument(scope);
      }
      // Delete document content from storage. Include forks if doc is a trunk.
      const forks = forkId ? [] : await this._dbManager.getDocForks(docId);
      const docsToDelete = [
        docId,
        ...forks.map((fork) =>
          buildUrlId({forkId: fork.id, forkUserId: fork.createdBy!, trunkId: docId})),
      ];
      await Promise.all(docsToDelete.map(docName => this._docManager.deleteDoc(null, docName, true)));
      // Permanently delete from database.
      const query = await this._dbManager.deleteDocument(scope);
      this._dbManager.checkQueryResult(query);
      this._grist.getTelemetry().logEvent(mreq, 'deletedDoc', {
        full: {
          docIdDigest: docId,
          userId: mreq.userId,
          altSessionId: mreq.altSessionId,
        },
      });
      await sendReply(req, res, query);
    } else {
      await this._dbManager.softDeleteDocument(scope);
      await sendOkReply(req, res);
    }
    await this._dbManager.flushSingleDocAuthCache(scope, docId);
    await this._docManager.interruptDocClients(docId);
  }

  private async _runSql(activeDoc: ActiveDoc, req: RequestWithLogin, res: Response,
                        options: Types.SqlPost) {
    if (!await activeDoc.canCopyEverything(docSessionFromRequest(req))) {
      throw new ApiError('insufficient document access', 403);
    }
    const statement = options.sql;
    // A very loose test, just for an early error message.
    if (!(statement.toLowerCase().includes('select'))) {
      throw new ApiError('only select statements are supported', 400);
    }
    const sqlOptions = activeDoc.docStorage.getOptions();
    if (!sqlOptions?.canInterrupt || !sqlOptions?.bindableMethodsProcessOneStatement) {
      throw new ApiError('The available SQLite wrapper is not adequate', 500);
    }
    const timeout = Math.max(0, Math.min(MAX_CUSTOM_SQL_MSEC,
      optIntegerParam(options.timeout, 'timeout') || MAX_CUSTOM_SQL_MSEC));
    // Wrap in a select to commit to the SELECT branch of the SQLite
    // grammar. Note that ; isn't a problem.
    //
    // The underlying SQLite functions used will only process the
    // first statement in the supplied text. For node-sqlite3, the
    // remainder is placed in a "tail string" ignored by that library.
    // So a Robert'); DROP TABLE Students;-- style attack isn't applicable.
    //
    // Since Grist is used with multiple SQLite wrappers, not just
    // node-sqlite3, we have added a bindableMethodsProcessOneStatement
    // flag that will need adding for each wrapper, and this endpoint
    // will not operate unless that flag is set to true.
    //
    // The text is wrapped in select * from (USER SUPPLIED TEXT) which
    // puts SQLite unconditionally onto the SELECT branch of its
    // grammar. It is straightforward to break out of such a wrapper
    // with multiple statements, but again, only the first statement
    // is processed.
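    //
    // For example (illustrative), a POST body of
    //   {"sql": "SELECT id, name FROM Table1 WHERE id > ?", "args": [10]}
    // is executed below as:
    //   select * from (SELECT id, name FROM Table1 WHERE id > ?)
    // with 10 bound to the placeholder.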
    const wrappedStatement = `select * from (${statement})`;
    const interrupt = setTimeout(async () => {
      await activeDoc.docStorage.interrupt();
    }, timeout);
    try {
      const records = await activeDoc.docStorage.all(wrappedStatement,
                                                     ...(options.args || []));
      res.status(200).json({
        statement,
        records: records.map(rec => ({
          fields: rec,
        })),
      });
    } catch (e) {
      if (e?.code === 'SQLITE_INTERRUPT') {
        res.status(400).json({
          error: "a slow statement resulted in a database interrupt",
        });
      } else if (e?.code === 'SQLITE_ERROR') {
        res.status(400).json({
          error: e?.message,
        });
      } else {
        throw e;
      }
    } finally {
      clearTimeout(interrupt);
    }
  }
}

export function addDocApiRoutes(
  app: Application, docWorker: DocWorker, docWorkerMap: IDocWorkerMap,
  docManager: DocManager, dbManager: HomeDBManager, grist: GristServer
) {
  const api = new DocWorkerApi(app, docWorker, docWorkerMap, docManager, dbManager, grist);
  api.addEndpoints();
}

/**
 * Options for returning results from a query about document data.
 * Currently these options don't affect the query itself, only the
 * results returned to the user.
 */
export interface QueryParameters {
  sort?: string[];  // Column names to sort by (ascending order by default,
                    // prepend "-" for descending order; can contain flags,
                    // see more in Sort.SortSpec).
  limit?: number;   // Limit on number of rows to return.
}

/**
 * Extract a sort parameter from a request, if present. Follows
 * https://jsonapi.org/format/#fetching-sorting for want of a better
 * standard - comma separated, defaulting to ascending order, keys
 * prefixed by "-" for descending order.
 *
 * The sort parameter can either be given as a query parameter, or
 * as a header.
 */
function getSortParameter(req: Request): string[]|undefined {
  const sortString: string|undefined = optStringParam(req.query.sort, 'sort') || req.get('X-Sort');
  if (!sortString) { return undefined; }
  return sortString.split(',');
}
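
// E.g. (illustrative) "?sort=name,-age" or the header "X-Sort: name,-age" yields
// ['name', '-age']: sort by name ascending, then by age descending.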

/**
 * Extract a limit parameter from a request, if present. Should be a
 * simple integer. The limit parameter can either be given as a query
 * parameter, or as a header.
 */
function getLimitParameter(req: Request): number|undefined {
  const limitString: string|undefined = optStringParam(req.query.limit, 'limit') || req.get('X-Limit');
  if (!limitString) { return undefined; }
  const limit = parseInt(limitString, 10);
  if (isNaN(limit)) { throw new Error('limit is not a number'); }
  return limit;
}
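
// E.g. (illustrative) "?limit=5" or the header "X-Limit: 5" yields 5.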

/**
 * Extract sort and limit parameters from a request, if they are present.
 */
function getQueryParameters(req: Request): QueryParameters {
  return {
    sort: getSortParameter(req),
    limit: getLimitParameter(req),
  };
}

/**
 * Sort table contents being returned. Sort keys with a '-' prefix
 * are sorted in descending order, otherwise ascending. Contents are
 * modified in place. Sort keys can contain sort options.
 * Columns can be expressed either as a colId (name string) or as a colRef (rowId number).
 */
function applySort(
  values: TableColValues,
  sort: string[],
  _columns: TableRecordValue[]|null = null) {
  if (!sort) { return values; }

  // First we need to prepare the column descriptions in ColValues format (plain objects).
  // This format is used by ServerColumnGetters.
  let properColumns: ColValues[] = [];

  // We will receive column information only for user tables, not for metatables. So
  // if this is the case, we will infer the columns from the result.
  if (!_columns) {
    _columns = Object.keys(values).map((col, index) => ({ id: col, fields: { colRef: index }}));
  }
  // For user tables, we will not get the id column (as this column is not in the schema),
  // so we need to make sure the column is there.
  else {
    // This is enough information for ServerColumnGetters.
    _columns = [..._columns, { id : 'id', fields: {colRef: 0 }}];
  }

  // Once we have proper columns, we can convert them to the format that ServerColumnGetters
  // understands.
  properColumns = _columns.map(c => ({
    ...c.fields,
    id : c.fields.colRef,
    colId: c.id
  }));

  // We will sort row indices in the values object, not row ids.
  const rowIndices = values.id.map((__, i) => i);
  const getters = new ServerColumnGetters(rowIndices, values, properColumns);
  const sortFunc = new SortFunc(getters);
  const colIdToRef = new Map(properColumns.map(({id, colId}) => [colId as string, id as number]));
  sortFunc.updateSpec(Sort.parseNames(sort, colIdToRef));
  rowIndices.sort(sortFunc.compare.bind(sortFunc));

  // Sort the resulting values according to the sorted index.
  for (const key of Object.keys(values)) {
    const col = values[key];
    values[key] = rowIndices.map(i => col[i]);
  }
  return values;
}

/**
 * Truncate columns to the first N values. Columns are modified in place.
 */
function applyLimit(values: TableColValues, limit: number) {
  // For no limit, or a limit of 0, do not apply any restriction.
  if (!limit) { return values; }
  for (const key of Object.keys(values)) {
    values[key].splice(limit);
  }
  return values;
}

/**
 * Apply query parameters to table contents. Contents are modified in place.
 */
export function applyQueryParameters(
  values: TableColValues,
  params: QueryParameters,
  columns: TableRecordValue[]|null = null): TableColValues {
  if (params.sort) { applySort(values, params.sort, columns); }
  if (params.limit) { applyLimit(values, params.limit); }
  return values;
}
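
// Example usage (illustrative): sort fetched table data by name descending and keep
// the first 10 rows:
//   applyQueryParameters(fromTableDataAction(tableData), {sort: ['-name'], limit: 10});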

function getErrorPlatform(tableId: string): TableOperationsPlatform {
  return {
    async getTableId() { return tableId; },
    throwError(verb, text, status) {
      throw new ApiError(verb + (verb ? ' ' : '') + text, status);
    },
    applyUserActions() {
      throw new Error('no document');
    }
  };
}

export async function getMetaTables(activeDoc: ActiveDoc, req: RequestWithLogin) {
  return await handleSandboxError("", [],
    activeDoc.fetchMetaTables(docSessionFromRequest(req)));
}

async function getTableOperations(
  req: RequestWithLogin,
  activeDoc: ActiveDoc,
  tableId?: string): Promise<TableOperationsImpl> {
  const options: OpOptions = {
    parseStrings: !isAffirmative(req.query.noparse)
  };
  const realTableId = await getRealTableId(tableId ?? req.params.tableId, {activeDoc, req});
  const platform: TableOperationsPlatform = {
    ...getErrorPlatform(realTableId),
    applyUserActions(actions, opts) {
      if (!activeDoc) { throw new Error('no document'); }
      return activeDoc.applyUserActions(
        docSessionFromRequest(req),
        actions,
        opts
      );
    }
  };
  return new TableOperationsImpl(platform, options);
}
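
// Note: parseStrings is on by default; a request can opt out of string parsing with
// the noparse query parameter, e.g. (illustrative) ?noparse=true on a records endpoint.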

async function handleSandboxError<T>(tableId: string, colNames: string[], p: Promise<T>): Promise<T> {
  return handleSandboxErrorOnPlatform(tableId, colNames, p, getErrorPlatform(tableId));
}

export interface DocApiUsagePeriod {
  unit: 'day' | 'hour' | 'minute';
  format: string;
  periodsPerDay: number;
}

export const docApiUsagePeriods: DocApiUsagePeriod[] = [
  {
    unit: 'day',
    format: 'YYYY-MM-DD',
    periodsPerDay: 1,
  },
  {
    unit: 'hour',
    format: 'YYYY-MM-DDTHH',
    periodsPerDay: 24,
  },
  {
    unit: 'minute',
    format: 'YYYY-MM-DDTHH:mm',
    periodsPerDay: 24 * 60,
  },
];

/**
 * Returns a key used for redis and a local cache, which store the number of API
 * requests made for the given document in the given period.
 * The key contains the current UTC date (and maybe hour and minute) so that counts
 * from previous periods are simply ignored and eventually evicted.
 * This means that the daily measured usage conceptually 'resets' at UTC midnight.
 * If `current` is false, returns a key for the next day/hour/minute.
 */
export function docPeriodicApiUsageKey(docId: string, current: boolean, period: DocApiUsagePeriod, m: moment.Moment) {
  if (!current) {
    m = m.clone().add(1, period.unit);
  }
  return `doc-${docId}-periodicApiUsage-${m.format(period.format)}`;
}

/**
 * Checks whether the doc API usage fits within the daily maximum.
 * If so, returns an array of keys for each unit of time whose usage should be incremented.
 * If not, returns undefined.
 *
 * Description of the algorithm this is implementing:
 *
 * Maintain up to 5 buckets: current day, next day, current hour, next hour, current minute.
 * For each API request, check in order:
 * - if current_day < DAILY_LIMIT, allow; increment all 3 current buckets
 * - else if current_hour < DAILY_LIMIT/24, allow; increment next_day, current_hour, and current_minute buckets.
 * - else if current_minute < DAILY_LIMIT/24/60, allow; increment next_day, next_hour, and current_minute buckets.
 * - else reject.
 * I think it has pretty good properties:
 * - steady low usage may be maintained even if a burst exhausted the daily limit
 * - user could get close to twice the daily limit on the first day with steady usage after a burst,
 *   but would then be limited to steady usage the next day.
 */
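// Worked example (illustrative), with dailyMax = 240: the per-period allowances are
// 240/day, ceil(240/24) = 10/hour, and ceil(240/1440) = 1/minute. A burst of 240
// requests exhausts the day bucket; after that, up to 10 requests per hour are still
// admitted via the hour bucket (charged against the next day), and beyond that, one
// request per minute via the minute bucket.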
export function getDocApiUsageKeysToIncr(
  docId: string, usage: LRUCache<string, number>, dailyMax: number, m: moment.Moment
): string[] | undefined {
  // Start with keys for the current day, hour, and minute.
  const keys = docApiUsagePeriods.map(p => docPeriodicApiUsageKey(docId, true, p, m));
  for (let i = 0; i < docApiUsagePeriods.length; i++) {
    const period = docApiUsagePeriods[i];
    const key = keys[i];
    const periodMax = Math.ceil(dailyMax / period.periodsPerDay);
    const count = usage.get(key) || 0;
    if (count < periodMax) {
      return keys;
    }
    // Allocation for the current day/hour/minute has been exceeded, increment the next day/hour/minute instead.
    keys[i] = docPeriodicApiUsageKey(docId, false, period, m);
  }
  // Usage exceeded all the time buckets, so return undefined to reject the request.
}

export interface WebhookSubscription {
  unsubscribeKey: string;
  webhookId: string;
}

/**
 * Converts `activeDoc` to XLSX and sends the converted data through `res`.
 */
export async function downloadXLSX(activeDoc: ActiveDoc, req: Request,
                                   res: Response, options: DownloadOptions) {
  const {filename} = options;
  res.setHeader('Content-Type', 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet');
  res.setHeader('Content-Disposition', contentDisposition(filename + '.xlsx'));
  return streamXLSX(activeDoc, req, res, options);
}