(core) fix sync to s3 when doc is marked as dirty but proves to be clean

Summary:
This fixes two problems:
 * A mistake in `KeyedMutex.runExclusive`.
 * Logic about saving a document to s3 when the document is found to match what is already there.
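For context on the first bullet, here is a minimal sketch of what a keyed mutex looks like and the class of bug such a fix typically targets (a per-key queue that fails to release or clean up correctly). This is illustrative only; it is not Grist's actual `KeyedMutex` implementation.

```typescript
// Illustrative keyed mutex: serializes async work per key, while work on
// different keys runs concurrently. Not Grist's actual KeyedMutex.
class KeyedMutex {
  private _tails = new Map<string, Promise<unknown>>();

  public async runExclusive<T>(key: string, fn: () => Promise<T>): Promise<T> {
    const prev = this._tails.get(key) ?? Promise.resolve();
    // Queue fn behind the previous holder for this key; swallow the previous
    // holder's error so one failure doesn't block later callers forever.
    const run = prev.catch(() => undefined).then(fn);
    // The new tail settles when fn does, whether it succeeds or fails.
    const tail = run.catch(() => undefined);
    this._tails.set(key, tail);
    try {
      return await run;
    } finally {
      // Clean up the map entry once this key's queue drains. Forgetting this
      // kind of release/cleanup step is the classic mistake in such code.
      if (this._tails.get(key) === tail) { this._tails.delete(key); }
    }
  }
}
```

Callers on the same key run strictly one after another; callers on different keys are not serialized against each other.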

`HostedStorageManager.flushDoc` could get caught in a loop if a document was uploaded to s3 and then, without any change to it, marked as dirty. Low-level code would detect there was no change and skip the upload; but then the snapshotId could be unknown, causing an error and retries. This diff fixes that problem by discovering the snapshotId on downloads and tracking it. It also corrects a mutex problem that may have been creating the scenario. A small delay is added to `flushDoc` to mitigate the effect of similar problems in the future. Exponential backoff would be good, but `flushDoc` is called in some situations where long delays would negatively impact worker shutdown or user work.
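The retry-with-a-small-delay mitigation described above can be sketched as a bounded retry loop with a fixed short pause. The helper name and parameters here are hypothetical, not the actual `flushDoc` code; the comments note the tradeoff against exponential backoff.

```typescript
// Illustrative only: bounded retries with a short fixed delay, in the spirit
// of the flushDoc change. Names and defaults are hypothetical.
async function retryWithDelay<T>(
  fn: () => Promise<T>,
  attempts: number,
  delayMs: number,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // A fixed short delay bounds worst-case latency; exponential backoff
      // would retry less aggressively but could hold up worker shutdown or
      // user-facing work, which is why it is avoided here.
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr;
}
```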

Test Plan: added tests

Reviewers: dsagal

Reviewed By: dsagal

Differential Revision: https://phab.getgrist.com/D2654
Paul Fitzpatrick
2020-11-09 22:28:30 -05:00
parent 6d95418cc1
commit e30d0fd5d0
4 changed files with 32 additions and 23 deletions


@@ -196,9 +196,9 @@ export class DocSnapshotInventory implements IInventory {
       log.error(`Surprise in getSnapshots, expected ${expectSnapshotId} for ${key} ` +
         `but got ${data[0]?.snapshotId}`);
     }
-    // Reconstructed data is precious. Save it to S3 and local cache.
+    // Reconstructed data is precious. Make sure it gets saved.
     await this._saveToFile(fname, data);
-    await this._meta.upload(key, fname);
+    this._needFlush.add(key);
   }
 }
 return data;