
Commit 0042f13

fjl authored and karalabe committed
eth/downloader: separate state sync from queue (ethereum#14460)
* eth/downloader: separate state sync from queue

  Scheduling of state node downloads hogged the downloader queue lock when new requests were scheduled. This caused timeouts for other requests. With this change, state sync is fully independent of all other downloads and doesn't involve the queue at all.

  State sync is started and checked on in processContent. This is slightly awkward because processContent doesn't have a select loop. Instead, the queue is closed by an auxiliary goroutine when state sync fails (a minimal sketch of this pattern follows these notes). We tried several alternatives to this but settled on the current approach because it's the least amount of change overall.

  Handling of the pivot block has changed slightly: the queue previously prevented import of pivot block receipts before the state of the pivot block was available. In this commit, the receipt will be imported before the state. This causes an annoyance where the pivot block is committed as the fast block head even when state downloads fail. Stay tuned for more updates in this area ;)

* eth/downloader: remove cancelTimeout channel

* eth/downloader: retry state requests on timeout

* eth/downloader: improve comment

* eth/downloader: mark peers idle when state sync is done

* eth/downloader: move pivot block splitting to processContent

  This change also ensures that pivot block receipts aren't imported before the pivot block itself.

* eth/downloader: limit state node retries

* eth/downloader: improve state node error handling and retry check

* eth/downloader: remove maxStateNodeRetries

  It fails the sync too much.

* eth/downloader: remove last use of cancelCh in statesync.go

  Fixes TestDeliverHeadersHang*Fast and (hopefully) the weird cancellation behaviour at the end of fast sync.

* eth/downloader: fix leak in runStateSync

* eth/downloader: don't run processFullSyncContent in LightSync mode

* eth/downloader: improve comments

* eth/downloader: fix vet, megacheck

* eth/downloader: remove unrequested tasks anyway

* eth/downloader, trie: various polishes around duplicate items

  This commit explicitly tracks duplicate and unexpected state deliveries done against a trie Sync structure, also adding them to the import info logs.

  The commit moves the db batch used to commit trie changes one level deeper so it's flushed after every node insertion. This is needed to avoid a lot of duplicate retrievals caused by inconsistencies between Sync internals and the database. A better approach would be to track not-yet-written states in trie.Sync and flush on commit, but I'm focusing on correctness first for now.

  The commit fixes a regression around the pivot block fail counter. The counter was previously reset to 1 if and only if a sync cycle progressed (inserted at least 1 entry into the database). The current code reset it as soon as a node was delivered, which is not strong enough, because unless the node ends up written to disk, an attacker can just loop and attack ad infinitum.

  The commit also fixes a regression around state deliveries and timeouts. The old downloader tracked whether a delivery was stale (none of the deliveries were requested), in which case it didn't mark the peer idle and didn't send further requests, since staleness signals a past timeout. The current code marked the peer idle even on stale deliveries, which eventually caused two requests to be in flight at the same time, making the deliveries always stale and massively duplicating retrievals between multiple peers.

* eth/downloader: fix state request leak

  This commit fixes the hang seen sometimes while doing the state sync. The cause of the hang was a rare combination of events: request state data from a peer, the peer drops and reconnects almost immediately. This caused a new download task to be assigned to the peer, overwriting the old one still waiting for a timeout, which in turn leaked the requests out, never to be retried. The fix is to ensure that a task assignment moves any pending one back into the retry queue.

  The commit also fixes a regression with peer dropping due to stalls. The current code considered a peer stalling if it timed out delivering 1 item. However, the downloader never requests only one, the minimum is 2 (an attempt to fine-tune estimated latency/bandwidth). The fix is simply to drop the peer if a timeout is detected at 2 items.

  Apart from the above bugfixes, the commit contains some code polishes I made while debugging the hang.

* core, eth, trie: support batched trie sync db writes

* trie: rename SyncMemCache to syncMemBatch
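The queue-closing workaround from the first note above is easiest to see as a pattern in isolation. The sketch below is illustrative only: stateSync and queue are simplified stand-ins, not the real eth/downloader types, which carry far more state.

// Minimal sketch of the "auxiliary goroutine closes the queue" pattern
// described above. All names here are illustrative stand-ins.
package main

import (
	"errors"
	"fmt"
)

// stateSync stands in for the independent state-sync run: done is
// closed when the sync terminates, with err recording the outcome.
type stateSync struct {
	done chan struct{}
	err  error
}

// Wait blocks until the state sync finishes and returns its error.
func (s *stateSync) Wait() error {
	<-s.done
	return s.err
}

// queue stands in for the downloader's result queue; closing quit
// unblocks anyone waiting on queued results.
type queue struct {
	quit chan struct{}
}

func (q *queue) Close() { close(q.quit) }

func main() {
	s := &stateSync{done: make(chan struct{})}
	q := &queue{quit: make(chan struct{})}

	// processContent has no select loop of its own, so an auxiliary
	// goroutine watches the state sync and closes the queue on failure.
	go func() {
		if err := s.Wait(); err != nil {
			q.Close()
		}
	}()

	// Simulate a failing state sync: the queue gets closed, which would
	// unblock a processContent call stuck waiting for results.
	s.err = errors.New("state sync failed")
	close(s.done)

	<-q.quit
	fmt.Println("queue closed after state sync failure:", s.err)
}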
1 parent 58a1e13 commit 0042f13


9 files changed: +769 additions, -479 deletions


core/state/sync.go

Lines changed: 9 additions & 3 deletions
@@ -59,10 +59,16 @@ func (s *StateSync) Missing(max int) []common.Hash {
 }
 
 // Process injects a batch of retrieved trie nodes data, returning if something
-// was committed to the database and also the index of an entry if processing of
+// was committed to the memcache and also the index of an entry if processing of
 // it failed.
-func (s *StateSync) Process(list []trie.SyncResult, dbw trie.DatabaseWriter) (bool, int, error) {
-	return (*trie.TrieSync)(s).Process(list, dbw)
+func (s *StateSync) Process(list []trie.SyncResult) (bool, int, error) {
+	return (*trie.TrieSync)(s).Process(list)
+}
+
+// Commit flushes the data stored in the internal memcache out to persistent
+// storage, returning the number of items written and any occurred error.
+func (s *StateSync) Commit(dbw trie.DatabaseWriter) (int, error) {
+	return (*trie.TrieSync)(s).Commit(dbw)
 }
 
 // Pending returns the number of state entries currently pending for download.
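The diff above splits the old single-step Process(list, dbw) into Process (stage into an internal memory batch) and Commit (flush to disk). A toy version of that split, with hypothetical names (KVWriter standing in for trie.DatabaseWriter, memBatch loosely mirroring the renamed syncMemBatch), might look like the sketch below. It illustrates the shape of the API, not the real trie.Sync internals.

// Toy illustration of the Process/Commit split: Process accumulates
// items in an in-memory batch, Commit flushes them to persistent
// storage. All names are hypothetical.
package main

import "fmt"

// KVWriter is a stand-in for trie.DatabaseWriter.
type KVWriter interface {
	Put(key, value []byte) error
}

type memBatch struct {
	order [][]byte          // insertion order of keys
	batch map[string][]byte // key -> node data
}

func newMemBatch() *memBatch {
	return &memBatch{batch: make(map[string][]byte)}
}

// Process stages a node in the memory batch instead of writing it out.
func (m *memBatch) Process(key, data []byte) {
	if _, ok := m.batch[string(key)]; !ok {
		m.order = append(m.order, key)
	}
	m.batch[string(key)] = data
}

// Commit flushes the staged nodes to persistent storage, returning the
// number of items written and, on failure, the index of the bad entry.
func (m *memBatch) Commit(dbw KVWriter) (int, error) {
	for i, key := range m.order {
		if err := dbw.Put(key, m.batch[string(key)]); err != nil {
			return i, err
		}
	}
	written := len(m.order)
	m.order, m.batch = nil, make(map[string][]byte)
	return written, nil
}

// mapWriter is a trivial in-memory KVWriter for the demo.
type mapWriter map[string][]byte

func (w mapWriter) Put(k, v []byte) error { w[string(k)] = v; return nil }

func main() {
	mb := newMemBatch()
	mb.Process([]byte("node-a"), []byte{0x01})
	mb.Process([]byte("node-b"), []byte{0x02})

	db := mapWriter{}
	n, err := mb.Commit(db)
	fmt.Println("written:", n, "err:", err) // written: 2 err: <nil>
}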

core/state/sync_test.go

Lines changed: 20 additions & 5 deletions
@@ -138,9 +138,12 @@ func testIterativeStateSync(t *testing.T, batch int) {
 			}
 			results[i] = trie.SyncResult{Hash: hash, Data: data}
 		}
-		if _, index, err := sched.Process(results, dstDb); err != nil {
+		if _, index, err := sched.Process(results); err != nil {
 			t.Fatalf("failed to process result #%d: %v", index, err)
 		}
+		if index, err := sched.Commit(dstDb); err != nil {
+			t.Fatalf("failed to commit data #%d: %v", index, err)
+		}
 		queue = append(queue[:0], sched.Missing(batch)...)
 	}
 	// Cross check that the two states are in sync
@@ -168,9 +171,12 @@ func TestIterativeDelayedStateSync(t *testing.T) {
 			}
 			results[i] = trie.SyncResult{Hash: hash, Data: data}
 		}
-		if _, index, err := sched.Process(results, dstDb); err != nil {
+		if _, index, err := sched.Process(results); err != nil {
 			t.Fatalf("failed to process result #%d: %v", index, err)
 		}
+		if index, err := sched.Commit(dstDb); err != nil {
+			t.Fatalf("failed to commit data #%d: %v", index, err)
+		}
 		queue = append(queue[len(results):], sched.Missing(0)...)
 	}
 	// Cross check that the two states are in sync
@@ -206,9 +212,12 @@ func testIterativeRandomStateSync(t *testing.T, batch int) {
 			results = append(results, trie.SyncResult{Hash: hash, Data: data})
 		}
 		// Feed the retrieved results back and queue new tasks
-		if _, index, err := sched.Process(results, dstDb); err != nil {
+		if _, index, err := sched.Process(results); err != nil {
 			t.Fatalf("failed to process result #%d: %v", index, err)
 		}
+		if index, err := sched.Commit(dstDb); err != nil {
+			t.Fatalf("failed to commit data #%d: %v", index, err)
+		}
 		queue = make(map[common.Hash]struct{})
 		for _, hash := range sched.Missing(batch) {
 			queue[hash] = struct{}{}
@@ -249,9 +258,12 @@ func TestIterativeRandomDelayedStateSync(t *testing.T) {
 			}
 		}
 		// Feed the retrieved results back and queue new tasks
-		if _, index, err := sched.Process(results, dstDb); err != nil {
+		if _, index, err := sched.Process(results); err != nil {
 			t.Fatalf("failed to process result #%d: %v", index, err)
 		}
+		if index, err := sched.Commit(dstDb); err != nil {
+			t.Fatalf("failed to commit data #%d: %v", index, err)
+		}
 		for _, hash := range sched.Missing(0) {
 			queue[hash] = struct{}{}
 		}
@@ -283,9 +295,12 @@ func TestIncompleteStateSync(t *testing.T) {
 			results[i] = trie.SyncResult{Hash: hash, Data: data}
 		}
 		// Process each of the state nodes
-		if _, index, err := sched.Process(results, dstDb); err != nil {
+		if _, index, err := sched.Process(results); err != nil {
 			t.Fatalf("failed to process result #%d: %v", index, err)
 		}
+		if index, err := sched.Commit(dstDb); err != nil {
+			t.Fatalf("failed to commit data #%d: %v", index, err)
+		}
 		for _, result := range results {
 			added = append(added, result.Hash)
 		}
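All five tests now share the same fetch/process/commit loop. Condensed into one hypothetical helper (assuming it sits in the same package as these tests with the file's existing imports, and with fetch standing in for the inline srcDb lookups the real tests do), the shape is roughly:

// syncLoop condenses the pattern exercised by the tests above: ask the
// scheduler for missing nodes, fetch them, stage them with Process, then
// flush them to the destination database with Commit.
func syncLoop(t *testing.T, sched *StateSync, dstDb trie.DatabaseWriter, batch int, fetch func(common.Hash) []byte) {
	for queue := sched.Missing(batch); len(queue) > 0; queue = sched.Missing(batch) {
		results := make([]trie.SyncResult, len(queue))
		for i, hash := range queue {
			results[i] = trie.SyncResult{Hash: hash, Data: fetch(hash)}
		}
		// Stage the fetched nodes in the sync's internal memory batch.
		if _, index, err := sched.Process(results); err != nil {
			t.Fatalf("failed to process result #%d: %v", index, err)
		}
		// Flush the staged nodes out to the destination database.
		if index, err := sched.Commit(dstDb); err != nil {
			t.Fatalf("failed to commit data #%d: %v", index, err)
		}
	}
}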
