Batch load problems (include file)

Attempting to load a moderately sized database for testing, and running into problems; some appear to have been reported before but were resolved/closed. I'm trying to turn those into reproducible issues, although breaking each of the huge import lines into 35 pieces seems to have helped a lot…

I'm just running a single connection to a single server in a cluster of 3, each with 8 GB of RAM, and the database is only a few hundred MB…
The first time through I had stalls, and one of the servers crashed because the process grew to over 14 GB… The export is a little nasty in that each line was about 1 MB and inserted multiple rows at once (one huge INSERT per line), with no transactions. (Not my idea; that's the way the test db comes…) Breaking that up to 500 rows per line, with each line under 32 KB, seems to have helped performance and the other issues a lot…
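
In case it helps anyone reproduce the workaround, here is a minimal sketch of that rewrite step. The file names, and the assumption that "),(" never appears inside a quoted value, are mine and not guarantees about the dump format:

    # split_inserts.py -- naive rewrite of huge multi-row INSERT lines into 500-row batches.
    # Assumes each statement is "INSERT INTO t VALUES (...),(...),...;" on a single line.
    BATCH = 500  # rows per rewritten INSERT

    with open("load_employees.dump") as src, open("load_employees_split.dump", "w") as dst:
        for line in src:
            line = line.rstrip("\n")
            if not line.startswith("INSERT INTO") or " VALUES " not in line:
                dst.write(line + "\n")              # pass non-INSERT lines through unchanged
                continue
            prefix, values = line.split(" VALUES ", 1)
            parts = values.rstrip().rstrip(";").split("),(")
            rows = []
            for i, r in enumerate(parts):           # restore the parens removed by the split
                if i > 0:
                    r = "(" + r
                if i < len(parts) - 1:
                    r = r + ")"
                rows.append(r)
            for i in range(0, len(rows), BATCH):    # one INSERT per batch of rows
                dst.write(prefix + " VALUES " + ",".join(rows[i:i + BATCH]) + ";\n")

The rewritten file can then be fed back in the same way, e.g. cockroach sql --host=10.0.4.91 < load_employees_split.dump.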

However, importing the file from inside the interactive SQL shell fails with:
root@10.0.4.91:26257/> | cat load_employees.dump
driver: bad connection
connection lost; opening new connection and resetting session parameters…

(above works with small files)

The file that fails is 17 MB (rewritten into INSERT lines under 30 KB each), and it loads fine with
cockroach sql --host=10.0.4.91 <load_employees.dump

Initial testing was with the previous week's beta. The above fails with beta-20170420.

^ That's usually a sign of the server having crashed. Is that indeed what happens? If so, can you please look for a panic and stack trace at the end of the crdb log file?
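
(If it helps, something like this quick scan of a node's logs will surface a panic and the start of the stack trace; the path below is just a guess at the default local store, so adjust it for your setup, and a plain grep for "panic" does the same job.)

    # check_panic.py -- look for a Go panic in the CockroachDB logs.
    import glob

    for path in sorted(glob.glob("cockroach-data/logs/cockroach*.log")):
        with open(path, errors="replace") as f:
            lines = f.readlines()
        for i, line in enumerate(lines):
            if "panic" in line.lower():
                print("--- %s:%d ---" % (path, i + 1))
                print("".join(lines[i:i + 25]))     # panic line plus the following stack frames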

The server doesn't crash. I think it's more likely a client issue. It's easy to reproduce, but you need a large import file. I can upload the 17 MB test file I am using somewhere if it will help reproduce.

@jlauro - it would be great if you could create a GitHub issue and share any relevant information. We’ll have someone look into it.