[fix](cloud) batch process ttl cache block gc to limit lock hold time per batch #50387
Conversation
GC of too many TTL cache blocks in a single pass holds the cache lock long enough to spike cache lock latency and, in turn, query latency. Limit the GC work to bounded batches so the lock is released between them. Signed-off-by: zhengyu <[email protected]>
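A minimal sketch of the batching idea described above, using hypothetical names (`TTLCache`, `gc_expired`, `batch_size`) rather than the actual Doris BE file cache types: expired TTL blocks are collected in bounded batches under the cache lock, and the blocks are released outside the lock so queries can acquire it between batches.

```cpp
#include <cstdint>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical block handle; the real BE block type is different.
struct TTLBlock {
    int64_t expiration_s;  // absolute expiration time in seconds
};

class TTLCache {
public:
    // GC expired TTL blocks in bounded batches so the cache lock is never
    // held for one long scan; queries can take the lock between batches.
    void gc_expired(int64_t now_s, size_t batch_size = 128) {
        while (true) {
            std::vector<TTLBlock> expired;
            {
                std::lock_guard<std::mutex> guard(_lock);
                while (!_blocks.empty() && expired.size() < batch_size &&
                       _blocks.front().expiration_s <= now_s) {
                    expired.push_back(_blocks.front());
                    _blocks.pop_front();
                }
            }  // lock released here; the actual release work runs unlocked
            if (expired.empty()) break;
            release_blocks(expired);
            std::this_thread::yield();  // let waiting readers grab the lock
        }
    }

private:
    void release_blocks(const std::vector<TTLBlock>& blocks) {
        (void)blocks;  // free the underlying cache segments (omitted)
    }

    std::mutex _lock;
    std::deque<TTLBlock> _blocks;  // assumed ordered by expiration time
};
```

The batch size bounds the worst-case lock hold time per pass, trading a little GC throughput for predictable query latency, which is the trade-off the PR description is making.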
Thank you for your contribution to Apache Doris. Please clearly describe your PR:
run buildall
LGTM
PR approved by at least one committer and no changes requested.
PR approved by anyone and no changes requested.
TPC-H: Total hot run time: 34231 ms
TPC-DS: Total hot run time: 185546 ms
ClickBench: Total hot run time: 29.84 s
BE UT Coverage Report: Increment line coverage / Increment coverage report
BE Regression P0 && UT Coverage Report: Increment line coverage / Increment coverage report
GC of too many TTL cache blocks in a single pass holds the cache lock long enough to spike cache lock latency and, in turn, query latency. Limit the GC work to bounded batches so the lock is released between them.
What problem does this PR solve?
Issue Number: close #xxx
Related PR: #xxx
Problem Summary:
Release note
None
Check List (For Author)
Test
Behavior changed:
Does this need documentation?
Check List (For Reviewer who merges this PR)