PostgreSQL CopyInSchema error

I have a Go client that uses CopyInSchema to import large datasets, like this:

stmt, err := txn.Prepare(pq.CopyInSchema(*schema, *table, hdrs...))
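
For context, the surrounding flow looks roughly like this (simplified; the copyRows helper and the in-memory rows slice are just to illustrate how I drive the COPY):

    package importer

    import (
        "database/sql"

        "github.com/lib/pq"
    )

    // copyRows streams one set of rows through COPY inside a single
    // transaction. schema, table, and hdrs mirror the flags in my
    // program; rows is an illustrative in-memory stand-in.
    func copyRows(db *sql.DB, schema, table string, hdrs []string, rows [][]interface{}) error {
        txn, err := db.Begin()
        if err != nil {
            return err
        }
        stmt, err := txn.Prepare(pq.CopyInSchema(schema, table, hdrs...))
        if err != nil {
            txn.Rollback()
            return err
        }
        for _, row := range rows {
            if _, err := stmt.Exec(row...); err != nil {
                txn.Rollback()
                return err
            }
        }
        // An Exec with no arguments flushes the buffered COPY data.
        if _, err := stmt.Exec(); err != nil {
            txn.Rollback()
            return err
        }
        if err := stmt.Close(); err != nil {
            txn.Rollback()
            return err
        }
        return txn.Commit()
    }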

A smaller one worked OK with about 96K rows.

A larger one with almost 400k rows failed with this message:

pq: kv/txn_coord_sender.go:926: transaction is too large to commit: 100349 intents

I had already tested much larger datasets using PostgreSQL itself. Perhaps PG handles transaction size differently…

Should this work? Or is there a way to make it work?

Thanks in advance,
Cecil

@mandolyte What version of cockroach are you running?

@bram I am using the version referenced in the getting started docs, which I think is 1.1.5.

@mandolyte

In v1.1.x, there was a limit on the size of a transaction. That limit has been removed in 2.x, where transactions are bound only by available memory.

Can you decrease the amount of data you're trying to copy in? If you split the load into smaller transactions, as in the sketch below, you should be fine.
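
For example, something along these lines (a rough sketch; batchSize is illustrative, and copyRows stands in for the per-transaction COPY helper from your snippet above):

    // Rough sketch: run the COPY in fixed-size batches, one transaction
    // per batch, so no single transaction accumulates 100k+ intents.
    // batchSize is illustrative; tune it to your data.
    const batchSize = 50000

    func copyInBatches(db *sql.DB, schema, table string, hdrs []string, rows [][]interface{}) error {
        for start := 0; start < len(rows); start += batchSize {
            end := start + batchSize
            if end > len(rows) {
                end = len(rows)
            }
            // copyRows is the per-transaction COPY helper sketched earlier.
            if err := copyRows(db, schema, table, hdrs, rows[start:end]); err != nil {
                return err
            }
        }
        return nil
    }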

Let me know how it goes.

Sorry, I forgot to reply. I already know it works for smaller datasets. I'll try v2 at some point and repeat the tests. I doubt that PG itself is limited by available memory, since I have used CopyInSchema to import 750M rows in the past; using Greenplum (a fork of PG 8.x), it took about 3 hours. I also became aware of your own IMPORT tool, so I may give that a try too.

Thanks for the follow-up!