[cherry-pick](branch-3.0) Pick "[Enhancement](compaction) Try get global lock when execute compaction (#49882)" #50432
Pick #49882
Background:
In cloud mode, compaction tasks for the same tablet may be scheduled across multiple BEs. To ensure that only one BE can execute a compaction task for a given tablet at a time, a global locking mechanism is used.
During compaction preparation, tablet and compaction information is written as key-value pairs to the metadata service, and a background thread periodically renews the lease. Another BE may compact a tablet only after its KV entry has expired or been removed, which guarantees that a tablet is compacted on at most one BE at a time.
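The sketch below is a simplified, hypothetical illustration of this lease-based global lock. `FakeMetaService` and its method names are stand-ins for the real metadata-service RPCs, not actual Doris code.

```cpp
// Hypothetical sketch (not Doris code): a KV-based global compaction lock
// with lease renewal, modeled as an in-process map for illustration.
#include <chrono>
#include <map>
#include <mutex>
#include <string>

using Clock = std::chrono::steady_clock;

class FakeMetaService {
public:
    // Returns true if no live lease from another BE exists for the tablet,
    // and records (or extends) ours.
    bool try_acquire(const std::string& tablet_key, const std::string& be_id,
                     std::chrono::seconds lease) {
        std::lock_guard<std::mutex> g(_mu);
        auto now = Clock::now();
        auto it = _leases.find(tablet_key);
        if (it != _leases.end() && it->second.expire > now &&
            it->second.owner != be_id) {
            return false;  // another BE holds a live lease on this tablet
        }
        _leases[tablet_key] = {be_id, now + lease};
        return true;
    }

    // Called periodically by the owning BE's background thread to extend the lease.
    void renew(const std::string& tablet_key, const std::string& be_id,
               std::chrono::seconds lease) {
        std::lock_guard<std::mutex> g(_mu);
        auto it = _leases.find(tablet_key);
        if (it != _leases.end() && it->second.owner == be_id) {
            it->second.expire = Clock::now() + lease;
        }
    }

    // Remove our lease once compaction finishes.
    void release(const std::string& tablet_key, const std::string& be_id) {
        std::lock_guard<std::mutex> g(_mu);
        auto it = _leases.find(tablet_key);
        if (it != _leases.end() && it->second.owner == be_id) {
            _leases.erase(it);
        }
    }

private:
    struct Lease { std::string owner; Clock::time_point expire; };
    std::mutex _mu;
    std::map<std::string, Lease> _leases;
};
```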
Problem:
Compaction tasks are processed through a thread pool. Currently, we first prepare compaction and acquire the global lock, and only then queue the task. If a BE is under heavy compaction pressure with all worker threads occupied, tablets may wait in the queue for extended periods. Meanwhile, other idle BEs cannot compact these tablets because they cannot acquire the global lock, leading to a resource imbalance in which some BEs are starved of work while others are overloaded.
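A hypothetical sketch of this current ordering, reusing the illustrative `FakeMetaService` above plus a stand-in `ThreadPool` (not the actual Doris thread pool API): the lock is taken at submission time and then held while the task merely waits for a worker.

```cpp
// Hypothetical sketch of the current (problematic) ordering.
#include <chrono>
#include <functional>
#include <string>

struct ThreadPool {
    // Minimal stand-in: runs the task inline for illustration only.
    void submit(std::function<void()> task) { task(); }
};

bool schedule_compaction_old(ThreadPool& pool, FakeMetaService& ms,
                             const std::string& tablet_key,
                             const std::string& be_id) {
    // The global lock is acquired before the task enters the queue ...
    if (!ms.try_acquire(tablet_key, be_id, std::chrono::seconds(20))) {
        return false;
    }
    // ... so on a busy BE the lease is held while the task sits in the queue,
    // blocking idle BEs from compacting this tablet in the meantime.
    pool.submit([&ms, tablet_key, be_id] {
        // ... prepare and execute compaction, renewing the lease periodically ...
        ms.release(tablet_key, be_id);
    });
    return true;
}
```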
Solution:
To address this issue, we'll modify the workflow to queue tasks first, then attempt to acquire the lock only when the task is about to be executed. This ensures that even if a tablet's compaction task is queued on one BE, another idle BE can still perform compaction on that tablet, resulting in better resource utilization across the cluster.
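A hypothetical sketch of the revised ordering, using the same illustrative types as above: the task is queued unconditionally, and the global lock is attempted only when a worker thread actually starts executing it.

```cpp
// Hypothetical sketch of the revised ordering: queue first, lock at execution time.
#include <chrono>
#include <iostream>
#include <string>

void schedule_compaction_new(ThreadPool& pool, FakeMetaService& ms,
                             const std::string& tablet_key,
                             const std::string& be_id) {
    // Enqueue first; nothing global is held while waiting for a worker thread.
    pool.submit([&ms, tablet_key, be_id] {
        // Try the global lock only when execution begins. If another BE is
        // already compacting this tablet, drop the task so the cluster does
        // not stall on this BE's queue backlog.
        if (!ms.try_acquire(tablet_key, be_id, std::chrono::seconds(20))) {
            std::cout << "tablet " << tablet_key
                      << " is being compacted on another BE, skip\n";
            return;
        }
        // ... prepare and execute compaction, renewing the lease periodically ...
        ms.release(tablet_key, be_id);
    });
}
```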
What problem does this PR solve?
Issue Number: close #xxx
Related PR: #xxx
Problem Summary:
Release note
None
Check List (For Author)
Test
Behavior changed:
Does this need documentation?
Check List (For Reviewer who merges this PR)