I have a Go client that uses pq.CopyInSchema to import large datasets, like this:
stmt, err := txn.Prepare(pq.CopyInSchema(*schema, *table, hdrs...))
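For context, the overall flow looks roughly like this (a simplified sketch; copyRows, hdrs, and rows are stand-ins for my actual CSV-loading code):

package main

import (
	"database/sql"

	"github.com/lib/pq"
)

// copyRows streams all rows into schema.table via COPY inside a single
// transaction. hdrs are the column names; each row holds one value per column.
func copyRows(db *sql.DB, schema, table string, hdrs []string, rows [][]interface{}) error {
	txn, err := db.Begin()
	if err != nil {
		return err
	}
	// CopyInSchema builds the COPY ... FROM STDIN statement for the target table.
	stmt, err := txn.Prepare(pq.CopyInSchema(schema, table, hdrs...))
	if err != nil {
		txn.Rollback()
		return err
	}
	// Each Exec with arguments buffers one row for the COPY.
	for _, row := range rows {
		if _, err := stmt.Exec(row...); err != nil {
			txn.Rollback()
			return err
		}
	}
	// A final Exec with no arguments flushes the buffered data.
	if _, err := stmt.Exec(); err != nil {
		txn.Rollback()
		return err
	}
	if err := stmt.Close(); err != nil {
		txn.Rollback()
		return err
	}
	return txn.Commit()
}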
A smaller dataset with about 96K rows imported fine.
A larger one with almost 400K rows failed with this message:
pq: kv/txn_coord_sender.go:926: transaction is too large to commit: 100349 intents
I had already tested much larger datasets using PostgreSQL itself, so perhaps PostgreSQL handles transaction size differently…
Should this work? Or is there a way to make it work?
Thanks in advance,
Cecil