
Commit 67fe551

gburd authored and Commitfest Bot committed
Use consistent naming of the clock-sweep algorithm.
Minor edits to comments only.
1 parent 317c117 commit 67fe551

File tree

5 files changed: +14 −14 lines changed


src/backend/storage/buffer/README

Lines changed: 2 additions & 2 deletions
@@ -211,9 +211,9 @@ Buffer Ring Replacement Strategy
 When running a query that needs to access a large number of pages just once,
 such as VACUUM or a large sequential scan, a different strategy is used.
 A page that has been touched only by such a scan is unlikely to be needed
-again soon, so instead of running the normal clock sweep algorithm and
+again soon, so instead of running the normal clock-sweep algorithm and
 blowing out the entire buffer cache, a small ring of buffers is allocated
-using the normal clock sweep algorithm and those buffers are reused for the
+using the normal clock-sweep algorithm and those buffers are reused for the
 whole scan. This also implies that much of the write traffic caused by such
 a statement will be done by the backend itself and not pushed off onto other
 processes.
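The README's description of the normal clock-sweep algorithm can be sketched in a few lines of C. This is an illustrative model only, not PostgreSQL's implementation; `usage_count`, `hand`, and `NBUFFERS` are stand-in names.

```c
#include <assert.h>

#define NBUFFERS 8

/* Illustrative clock-sweep model: each buffer carries a usage count; the
 * hand decrements counts as it passes and evicts the first buffer it
 * finds at zero. */
static int usage_count[NBUFFERS];
static int hand;                    /* next buffer the hand will examine */

static int
clock_sweep_victim(void)
{
    for (;;)
    {
        int buf = hand;

        hand = (hand + 1) % NBUFFERS;
        if (usage_count[buf] == 0)
            return buf;             /* found a victim */
        usage_count[buf]--;         /* recently used: spare it this lap */
    }
}
```

A buffer ring, as the README describes, would allocate a handful of buffers via this sweep once and then recycle them for the rest of the scan, leaving the bulk of the cache untouched.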

src/backend/storage/buffer/bufmgr.c

Lines changed: 4 additions & 4 deletions
@@ -3608,7 +3608,7 @@ BufferSync(int flags)
  * This is called periodically by the background writer process.
  *
  * Returns true if it's appropriate for the bgwriter process to go into
- * low-power hibernation mode. (This happens if the strategy clock sweep
+ * low-power hibernation mode. (This happens if the strategy clock-sweep
  * has been "lapped" and no buffer allocations have occurred recently,
  * or if the bgwriter has been effectively disabled by setting
  * bgwriter_lru_maxpages to 0.)
@@ -3658,7 +3658,7 @@ BgBufferSync(WritebackContext *wb_context)
 	uint32		new_recent_alloc;
 
 	/*
-	 * Find out where the freelist clock sweep currently is, and how many
+	 * Find out where the freelist clock-sweep currently is, and how many
 	 * buffer allocations have happened since our last call.
 	 */
 	strategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc);
@@ -3679,8 +3679,8 @@ BgBufferSync(WritebackContext *wb_context)
 
 	/*
 	 * Compute strategy_delta = how many buffers have been scanned by the
-	 * clock sweep since last time. If first time through, assume none. Then
-	 * see if we are still ahead of the clock sweep, and if so, how many
+	 * clock-sweep since last time. If first time through, assume none. Then
+	 * see if we are still ahead of the clock-sweep, and if so, how many
 	 * buffers we could scan before we'd catch up with it and "lap" it. Note:
 	 * weird-looking coding of xxx_passes comparisons are to avoid bogus
 	 * behavior when the passes counts wrap around.
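The wraparound caveat in that last comment can be made concrete. Below is a hedged sketch (illustrative names, not the actual BgBufferSync code): with unsigned pass counters, subtracting the previous count from the current one yields the correct number of completed laps even after the counter wraps around zero.

```c
#include <assert.h>
#include <stdint.h>

/* How far has the clock-sweep hand moved since the last bgwriter cycle?
 * 'passes' counts whole laps over nbuffers buffers.  The pass counter may
 * wrap, so we rely on well-defined unsigned subtraction instead of a
 * direct '<' comparison between the two counts. */
static long
sweep_delta(uint32_t prev_passes, int prev_buf,
            uint32_t cur_passes, int cur_buf,
            int nbuffers)
{
    uint32_t passes_delta = cur_passes - prev_passes;   /* wraparound-safe */

    return (long) passes_delta * nbuffers + (cur_buf - prev_buf);
}
```

The third assertion below is the interesting case: the pass counter wrapped from `UINT32_MAX` to 0, yet the unsigned difference still comes out as exactly one lap.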

src/backend/storage/buffer/freelist.c

Lines changed: 5 additions & 5 deletions
@@ -33,7 +33,7 @@ typedef struct
 	slock_t		buffer_strategy_lock;
 
 	/*
-	 * Clock sweep hand: index of next buffer to consider grabbing. Note that
+	 * clock-sweep hand: index of next buffer to consider grabbing. Note that
 	 * this isn't a concrete buffer - we only ever increase the value. So, to
 	 * get an actual buffer, it needs to be used modulo NBuffers.
 	 */
@@ -51,7 +51,7 @@ typedef struct
 	 * Statistics. These counters should be wide enough that they can't
 	 * overflow during a single bgwriter cycle.
 	 */
-	uint32		completePasses; /* Complete cycles of the clock sweep */
+	uint32		completePasses; /* Complete cycles of the clock-sweep */
 	pg_atomic_uint32 numBufferAllocs;	/* Buffers allocated since last reset */
 
 	/*
@@ -311,7 +311,7 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state, bool *from_r
 		}
 	}
 
-	/* Nothing on the freelist, so run the "clock sweep" algorithm */
+	/* Nothing on the freelist, so run the "clock-sweep" algorithm */
 	trycounter = NBuffers;
 	for (;;)
 	{
@@ -511,7 +511,7 @@ StrategyInitialize(bool init)
 		StrategyControl->firstFreeBuffer = 0;
 		StrategyControl->lastFreeBuffer = NBuffers - 1;
 
-		/* Initialize the clock sweep pointer */
+		/* Initialize the clock-sweep pointer */
 		pg_atomic_init_u32(&StrategyControl->nextVictimBuffer, 0);
 
 		/* Clear statistics */
@@ -759,7 +759,7 @@ GetBufferFromRing(BufferAccessStrategy strategy, uint32 *buf_state)
 	 *
 	 * If usage_count is 0 or 1 then the buffer is fair game (we expect 1,
 	 * since our own previous usage of the ring element would have left it
-	 * there, but it might've been decremented by clock sweep since then). A
+	 * there, but it might've been decremented by clock-sweep since then). A
 	 * higher usage_count indicates someone else has touched the buffer, so we
 	 * shouldn't re-use it.
 	 */

src/backend/storage/buffer/localbuf.c

Lines changed: 1 addition & 1 deletion
@@ -229,7 +229,7 @@ GetLocalVictimBuffer(void)
 	ResourceOwnerEnlarge(CurrentResourceOwner);
 
 	/*
-	 * Need to get a new buffer. We use a clock sweep algorithm (essentially
+	 * Need to get a new buffer. We use a clock-sweep algorithm (essentially
 	 * the same as what freelist.c does now...)
 	 */
 	trycounter = NLocBuffer;

src/include/storage/buf_internals.h

Lines changed: 2 additions & 2 deletions
@@ -80,8 +80,8 @@ StaticAssertDecl(BUF_REFCOUNT_BITS + BUF_USAGECOUNT_BITS + BUF_FLAG_BITS == 32,
  * The maximum allowed value of usage_count represents a tradeoff between
  * accuracy and speed of the clock-sweep buffer management algorithm. A
  * large value (comparable to NBuffers) would approximate LRU semantics.
- * But it can take as many as BM_MAX_USAGE_COUNT+1 complete cycles of
- * clock sweeps to find a free buffer, so in practice we don't want the
+ * But it can take as many as BM_MAX_USAGE_COUNT+1 complete cycles of the
+ * clock-sweep hand to find a free buffer, so in practice we don't want the
  * value to be very large.
  */
 #define BM_MAX_USAGE_COUNT	5
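The tradeoff that comment describes is easy to put in numbers. A sketch of the worst-case arithmetic only (`worst_case_sweep_steps` is an illustrative helper, not part of PostgreSQL): if every buffer starts at the maximum usage count, the hand must pass each buffer BM_MAX_USAGE_COUNT + 1 times before any count reaches zero.

```c
#include <assert.h>

#define BM_MAX_USAGE_COUNT 5

/* Upper bound on buffers the clock-sweep hand may examine before finding
 * a victim: every buffer at the maximum count needs BM_MAX_USAGE_COUNT
 * decrements plus one final visit at zero. */
static long
worst_case_sweep_steps(long nbuffers)
{
    return (long) (BM_MAX_USAGE_COUNT + 1) * nbuffers;
}
```

With the default of 5, a 16384-buffer cache (128 MB at 8 kB pages) could in the worst case require examining 98304 buffer headers for a single allocation, which is why a much larger maximum would be impractical.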

0 commit comments
