Is it possible to use Enterprise Backup to a local file?

I just got a trial license for Enterprise CRDB. I wanted to use Enterprise BACKUP because Core backup does not support incremental backups.
From the CRDB tutorials, I can see examples that use Enterprise BACKUP to back up to a cloud file system, such as AWS S3 or Google Cloud Storage, or to NFS. However, I have none of these. Is it possible to back up to the local file system using Enterprise BACKUP? If not, why isn't backup to a local FS supported?
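For reference, the tutorial examples I mean look roughly like this (the bucket name and credential placeholders here are mine, not from the docs):

BACKUP DATABASE test_db TO 's3://my-backup-bucket/test_db?AWS_ACCESS_KEY_ID=...&AWS_SECRET_ACCESS_KEY=...';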

I am a CockroachDB user too :grin:

If you have no cloud storage or NFS, and you are not planning to build an NFS cluster, there is another way to use local files to restore data.

You can put the backup files in the 'extern' directory under the data directory of each node. Then you can still use 'nodelocal:///xxxx/my-backupfile' to access your backup files. (This gets complicated if you have many nodes.)

This also works with the IMPORT command. If you don't know where each node's data directory is, you can just run the command anyway; the error message will show you the path.
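A rough sketch of what that looks like in practice (the hostnames and store path below are made up; substitute your own):

# copy the backup directory into each node's extern directory
scp -r ./my-backupfile node1:/mnt/ssd-0/extern/my-backupfile
scp -r ./my-backupfile node2:/mnt/ssd-0/extern/my-backupfile

# then, from a SQL shell
RESTORE DATABASE test_db FROM 'nodelocal:///my-backupfile';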

It is not a good idea to do it that way, though. Using cloud storage or NFS is recommended.

As @xiaolanzao66 pointed out above, you can use the nodelocal:/// scheme to write to the local filesystem – the path is relative to the “External IO Directory” which is controlled by the --external-io-dir flag and defaults to the extern subdirectory of the store directory.
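For example, a node can be started with that flag pointing somewhere other than the default (the paths here are illustrative):

cockroach start --insecure --store=/mnt/ssd-0 --external-io-dir=/mnt/backups

with which 'nodelocal:///foo' on that node resolves to /mnt/backups/foo.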

Also, as they mentioned, if you have more than one node, this doesn't make much sense unless that directory is a central NFS mount: each node writes its portion of the backup to its local disk, which leaves the backup fragmented across all the nodes. To be restored, the complete backup must be readable by every node in the restoring cluster, so leaving pieces of it on different nodes' local disks does not work (you could manually copy them all to a central node, but that is fragile).

Thus, for multi-node clusters, we recommend only some form of centralized storage (a cloud provider, an NFS share, or another file server) to which all nodes have the same access.
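One common arrangement, sketched here with made-up hostnames and paths: mount the same NFS export at the same location on every node and point the external IO directory at it, so the same nodelocal URL is valid cluster-wide:

# on every node
mount -t nfs backup-server:/exports/crdb-backups /mnt/crdb-backups
cockroach start ... --external-io-dir=/mnt/crdb-backups

# then a backup written by any node lands on shared storage
BACKUP DATABASE test_db TO 'nodelocal:///2019-01-14/full';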

I tried your solution with

backup database "db_name" to 'nodelocal:///data.sql';
And I finally found it with:
find / -name data.sql
/mnt/ssd-0/extern/data.sql

Thanks.

I am trying to test INCREMENTAL BACKUP. However, it was not successful when backing up a database with two tables.
The database "test_db" contains two tables, "backup_test" and "config".
I executed the following command:

backup database "test_db" to 'nodelocal:///01142019-1/backup_test_1';
drop database "test_db" cascade;
-- verify database deleted
restore database "test_db" from 'nodelocal:///01142019-1/backup_test_1';
-- verify database was restored successfully
backup database "test_db" to 'nodelocal:///01142019-1/backup_test_2' incremental from 'nodelocal:///01142019-1/backup_test_1';

Then I got the error below:

pq: previous backup does not contain table "config"

This is strange: the previous full backup 'nodelocal:///01142019-1/backup_test_1' definitely contains the "config" table. Could you explain why this happened?
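If it helps narrow this down, I believe the contents of the full backup can be listed with SHOW BACKUP (same path as above):

SHOW BACKUP 'nodelocal:///01142019-1/backup_test_1';

which should show whether "config" was actually included.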