# pg-recover

> ⚠️ **DO NOT DO THIS…** well, ever really, but especially on a server with failing disks.
> This is meant to be done on a server with perfectly fine disks, but corrupted Postgres blocks.

A dirty, terrible, dangerous Postgres recovery tool.

This is designed to recover as much data as possible from a Postgres table with bad disk blocks BUT good physical disks (say, after an unclean exit, or after you've snapshotted the bad disks and mounted them on a good server).

It does this by creating a clean "recovery" table with the same schema and reading rows from the bad table one at a time, skipping any rows and blocks that throw disk errors.

For more specifics on how this works technically, see [my blog post](https://garrettmills.dev/blog/2025/01/11/Salvaging-a-Corrupted-Table-from-PostgreSQL/).

Requirements:

- Bash + tools (cat, tac, cut, sed, &c.)
- Postgres client (`psql` and `pg_dump`)
- The table must have a `SERIAL` primary key and at least one other non-`NULL` column

Usage:

```text
USAGE: pg-recover.sh <user> <host> <database> <table> <primary key> <nonnull col> [<commit size>=500] [<start at>]

    user         - the Postgres user to connect with
    host         - the Postgres server host
    database     - the Postgres database
    table        - the Postgres table
    primary key  - the name of the SERIAL primary key column
    nonnull col  - the name of a DIFFERENT non-null column on the table
    commit size  - how many rows to recover before committing the transaction (default: 500)
    start at     - start at the specified primary key (descending)

Copyright (c) 2025 Garrett Mills
https://code.garrettmills.dev/garrettmills/pg-recover
```

Once the script finishes, you can import the recovered data like so:

```shell
psql [...] < pqr-final-attempt.sql
```

This will create a new table `<table>_recovery` with the recovered data.

License: See the `LICENSE` file.
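
Example:

A minimal sketch of an invocation, assuming a hypothetical database `app` on `localhost` with a table `events` whose `SERIAL` primary key is `id` and which has a non-null column `created_at`; all of these names are placeholders, not values the tool expects.

```shell
# Recover rows from the "events" table in the "app" database,
# committing every 1000 recovered rows.
# (user, host, database, table, and column names are placeholders)
./pg-recover.sh postgres localhost app events id created_at 1000
```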