(core) fix sync to s3 when doc is marked as dirty but proves to be clean

Summary:
This fixes two problems:
 * A mistake in `KeyedMutex.runExclusive`.
 * Logic for saving a document to s3 when the document is found to match what is already there.

`HostedStorageManager.flushDoc` could get caught in a loop if a document was uploaded to s3 and then, without any change to it, marked as dirty. Low-level code would detect that there was no change and skip the upload, but the snapshotId could then be unknown, causing an error and retries. This diff fixes that problem by discovering the snapshotId on downloads and tracking it. It also corrects a mutex bug that may have been creating the scenario in the first place. A small delay is added to `flushDoc` to mitigate the effect of similar problems in the future. Exponential backoff would be better, but `flushDoc` is called in some situations where long delays would negatively impact worker shutdown or user work.

Test Plan: added tests

Reviewers: dsagal

Reviewed By: dsagal

Differential Revision: https://phab.getgrist.com/D2654
Paul Fitzpatrick
2020-11-09 22:28:30 -05:00
parent 6d95418cc1
commit e30d0fd5d0
4 changed files with 32 additions and 23 deletions


@@ -12,7 +12,7 @@ export class KeyedMutex {
     if (!this._mutexes.has(key)) {
       this._mutexes.set(key, new Mutex());
     }
-    const mutex = this._mutexes.get(key)!
+    const mutex = this._mutexes.get(key)!;
     const unlock = await mutex.acquire();
     return () => {
       unlock();
@@ -27,7 +27,7 @@ export class KeyedMutex {
   public async runExclusive<T>(key: string, callback: MutexInterface.Worker<T>): Promise<T> {
     const unlock = await this.acquire(key);
     try {
-      return callback();
+      return await callback();
     } finally {
       unlock();
     }
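The `return await` change above is the essence of the mutex fix: inside an async function, `return callback()` lets the `finally` block run before the callback's promise settles, whereas `return await callback()` delays `finally` until the callback completes. When `finally` releases a lock, the first form unlocks while the critical section is still running. The following is a standalone sketch of that timing difference (not Grist's actual `KeyedMutex` code; the names `runBuggy`, `runFixed`, and `work` are hypothetical):

```typescript
// Records the order of events so the two behaviors can be compared.
const events: string[] = [];

// A stand-in critical section that takes a moment to finish.
async function work(): Promise<string> {
  events.push("work start");
  await new Promise((res) => setTimeout(res, 10));
  events.push("work end");
  return "done";
}

// Buggy form: the pending promise escapes the try block, so the
// finally clause (the "unlock") runs before the work has finished.
async function runBuggy<T>(cb: () => Promise<T>): Promise<T> {
  try {
    return cb();
  } finally {
    events.push("unlock");
  }
}

// Fixed form: awaiting inside try keeps finally from running until
// the callback's promise has settled.
async function runFixed<T>(cb: () => Promise<T>): Promise<T> {
  try {
    return await cb();
  } finally {
    events.push("unlock");
  }
}

async function demo(): Promise<[string[], string[]]> {
  events.length = 0;
  await runBuggy(work);
  const buggyOrder = [...events];  // "unlock" lands between start and end
  events.length = 0;
  await runFixed(work);
  const fixedOrder = [...events];  // "unlock" only after "work end"
  return [buggyOrder, fixedOrder];
}
```

With the buggy form the recorded order is `["work start", "unlock", "work end"]`, which for a real mutex means another caller could enter the critical section mid-flight; the fixed form yields `["work start", "work end", "unlock"]`.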