Delete only the first 50 GB of rows

We have a 9xx GB table that we need to trim down in size. When we started the delete on the DB table, the process ran for a full day and put strain on our whole system.
Before deleting rows amounting to a considerable number of GB, we wanted to evaluate the following:

  1. Is there a way I can choose 50 GB of data to delete and commit it?
  2. Is there a way I can know the size of each row?
  3. Is there a way I can choose chunks of data to delete/commit without overwhelming the whole set of nodes?
    Any answers appreciated
    Thanks
  1. You can repeatedly delete small chunks of the table using delete ... where ...
    The where condition should restrict the rows to a smaller set that can be deleted in a reasonable time (so it won’t affect your normal operations). You can also use a limit clause (Limiting Query Results | CockroachDB Docs) to restrict the number of rows; see the first sketch after this list.
  2. You can use pg_column_size to find out the size of an encoded value (https://pgpedia.info/p/pg_column_size.html). A faster way to get an average row size is to run select sum(range_size_mb) from [show ranges from table <table>]; and then divide by the number of rows in that table; see the second sketch after this list.
  3. Choosing small chunks of data to delete is the best approach here. The size and frequency of the batches being deleted will determine how much interference there is between the deletes and the rest of the workload; a rough sizing illustration follows after this list.
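
To illustrate answer 1, here is a minimal sketch of a single delete batch. The `events` table, the indexed `created_at` column, the cutoff date, and the batch size of 10,000 are assumptions for the example, not details from the original question.

```sql
-- One batch: delete up to 10,000 matching rows and commit (each statement
-- runs as its own implicit transaction). Repeat until 0 rows are affected.
-- `events`, `created_at`, and the cutoff date are hypothetical placeholders.
DELETE FROM events
WHERE created_at < '2021-01-01'
LIMIT 10000;
```

Running this in a client-side loop, with a short pause between iterations, keeps each transaction small enough to commit quickly.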
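
To illustrate answer 2, a sketch of both approaches against the same hypothetical `events` table (the `payload` column is also an assumption):

```sql
-- Encoded size, in bytes, of one column's value for a small sample of rows.
SELECT pg_column_size(payload) FROM events LIMIT 10;

-- Rough average row size: total range size of the table in MB ...
SELECT sum(range_size_mb) FROM [SHOW RANGES FROM TABLE events];
-- ... divided by the number of rows.
SELECT count(*) FROM events;
```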
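
For answer 3, a rough back-of-the-envelope illustration under assumed numbers: if the average row size works out to about 1 KB, then trimming 50 GB means deleting on the order of 50 million rows; at 10,000 rows per batch that is roughly 5,000 DELETE statements, which can be spread out over time (for example with a short pause between batches) to keep interference with foreground traffic low.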