Select query too slow

Hi team,

A SELECT on a small table is too slow.
The vacancy_room_meta table is updated every hour. Can I fix this by reducing gc.ttlseconds?
Is this related to garbage collection?
I don't understand why a table with only about 60,000 rows is slow to query.

ddnayo@10.212.100.33:32445/ddnayo> select count(*) from vacancy_room_meta;
  count
+-------+
  63083

select * from vacancy_room_meta limit 1000;
(1000 rows)

Time: 1m23.628851616s <============== Too Slow…

ddnayo@10.212.100.33:32445/ddnayo> select version();
version
+------------------------------------------------------------------------------------------+
CockroachDB CCL v20.1.6 (x86_64-unknown-linux-gnu, built 2020/09/24 18:16:45, go1.13.9)
(1 row)

Time: 6.070867ms

-- Table Schema
vacancy_room_meta | CREATE TABLE vacancy_room_meta (
| vacancy_room_meta_id INT8 NOT NULL DEFAULT unique_rowid(),
| product_no INT4 NOT NULL,
| accommodation_id INT4 NOT NULL,
| accommodation_name STRING(50) NOT NULL,
| addr1 STRING(15) NOT NULL,
| addr2 STRING(30) NOT NULL,
| agent_reservation_count INT4 NOT NULL,
| recommend_click_count INT4 NOT NULL,
| click_count INT4 NOT NULL,
| recommend_count INT4 NOT NULL,
| image_name STRING(50) NOT NULL,
| image_code STRING(4) NOT NULL,
| image_reference_id INT4 NOT NULL,
| room_id INT4 NOT NULL,
| room_type_id INT4 NOT NULL,
| room_name STRING(50) NOT NULL,
| space INT2 NOT NULL,
| structure STRING(100) NOT NULL,
| room_structure STRING(500) NOT NULL,
| standard_occupancy INT2 NOT NULL,
| max_occupancy INT2 NOT NULL,
| priority INT2 NOT NULL,
| room_type_code STRING(4) NOT NULL,
| room_type_code_name STRING(50) NOT NULL,
| room_quantity INT2 NOT NULL,
| restroom_quantity INT2 NOT NULL,
| room_category_codes STRING(500) NOT NULL,
| category_codes STRING(500) NOT NULL,
| CONSTRAINT "primary" PRIMARY KEY (vacancy_room_meta_id ASC),
| INDEX ix_vacancy_room_meta_01 (room_category_codes ASC),
| INDEX ix_vacancy_room_meta_02 (room_type_id ASC),
| INDEX ix_vacancy_room_meta_03 (room_id ASC),
| INDEX ix_vacancy_room_meta_04 (recommend_click_count DESC)
| )

Hi @jubong! I'm not too sure why it's slow yet, but that's an interesting idea about gc.ttlseconds. Why do you think that could be related? How many DELETEs are executed on this table?

Hi @rafiss,

I found this in the documentation:

The number of seconds overwritten values will be retained before garbage collection. Smaller values can save disk space if values are frequently overwritten; larger values increase the range allowed for AS OF SYSTEM TIME queries, also known as Time Travel Queries. It is not recommended to set this below 600 (10 minutes); doing so will cause problems for long-running queries. Also, since all versions of a row are stored in a single range that never splits, it is not recommended to set this so high that all the changes to a row in that time period could add up to more than 512 MiB; such oversized ranges could contribute to the server running out of memory or other problems. Default: 90000 (25 hours)

All data is updated every hour in the vacancy_room_meta table, and data is not deleted.

I created a new table, vacancy_room_meta_new, by duplicating the vacancy_room_meta table.
The same select * from vacancy_room_meta_new limit 1000; query completes in less than 100ms on the new table.
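
For reference, a copy like this can be made with CREATE TABLE ... AS SELECT; the exact statement I used doesn't matter, but it was roughly the following (note that CREATE TABLE ... AS does not copy secondary indexes, so those would need to be recreated separately):

CREATE TABLE vacancy_room_meta_new AS SELECT * FROM vacancy_room_meta;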

I see – yes if you are overwriting all the data with the same keys, then the gc.ttlseconds setting is relevant for that table. Because CockroachDB uses multiversion concurrency control, when data is written for the same key, the old value does not get replaced; instead it is marked as stale and then garbage collected later. The old value is still on disk, and a scan like the one in your query still has to iterate through all the old values on disk until it finds 1000 non-stale entries.

When you copy the data to a new table, none of these stale values are copied over, so the scan can work much more quickly.
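
If you want to confirm this, you could compare how much work the scan does on each table with EXPLAIN ANALYZE (on v20.1 this should produce a link to a statement diagram with per-operator statistics; the exact output format differs between versions, so treat this as approximate):

EXPLAIN ANALYZE SELECT * FROM vacancy_room_meta LIMIT 1000;
EXPLAIN ANALYZE SELECT * FROM vacancy_room_meta_new LIMIT 1000;

The scan on the old table should report reading far more keys than the 1000 rows it returns.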

For this table, if you need to optimize a query like select * from vacancy_room_meta limit 1000, then you can try setting a lower gc.ttlseconds, like: ALTER TABLE vacancy_room_meta CONFIGURE ZONE USING gc.ttlseconds = 4200. Just be aware that this will cause more cluster resources to be used for the background garbage collection job, and that it could cause problems for long operations like BACKUPs.
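
For example, you could check the zone configuration before and after the change (the 4200-second value is just the example from above; pick whatever fits your workload):

SHOW ZONE CONFIGURATION FOR TABLE vacancy_room_meta;
ALTER TABLE vacancy_room_meta CONFIGURE ZONE USING gc.ttlseconds = 4200;
SHOW ZONE CONFIGURATION FOR TABLE vacancy_room_meta;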

After waiting for garbage collection to run (25 hours), I re-measured the SELECT, but it is still slow.

Could you try running: SHOW RANGES FROM TABLE vacancy_room_meta

It’s possible the GC job has not completed. The range report here would show how much space is still being used.

Also, it would be good to compare this to SHOW RANGES FROM TABLE vacancy_room_meta_new; that way you can see how much space a table with no garbage in it is using.

ddnayo> SHOW RANGES FROM TABLE vacancy_room_meta;
start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
+---------------------+---------------------+----------+---------------+--------------+-----------------------+----------+--------------------+
NULL | /624049444451221507 | 2428998 | 0.301956 | 2 | | {1,2,3} | {"","",""}
/624049444451221507 | NULL | 2701996 | 23.508795 | 2 | | {1,2,3} | {"","",""}
(2 rows)

Time: 266.296443ms

ddnayo> SHOW RANGES FROM TABLE vacancy_room_meta_new;
start_key | end_key | range_id | range_size_mb | lease_holder | lease_holder_locality | replicas | replica_localities
+---------------------+---------------------+----------+---------------+--------------+-----------------------+----------+--------------------+
NULL | /625391328150847491 | 2702004 | 0.337035 | 1 | | {1,2,3} | {"","",""}
/625391328150847491 | NULL | 2702473 | 19.26641 | 2 | | {1,2,3} | {"","",""}
(2 rows)

Could you try using the cockroach node status --stats <url> command line tool? It should show you live_bytes, key_bytes, and value_bytes columns. If live_bytes is small but key_bytes + value_bytes is large, it means there is still a lot of garbage in the cluster.
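
For example (the connection flags here are placeholders, adjust them for how your cluster is secured and addressed):

cockroach node status --stats --certs-dir=<certs-dir> --host=<node-address>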