@@ -14,6 +14,8 @@ fail to handle concurrent input and output.
This version has the following changes:
* I/O can optionally be handled at a higher priority than other coroutines
[PR287](https://github.com/micropython/micropython-lib/pull/287).
+ * Tasks can yield with low priority, running when nothing else is pending.
+ * Callbacks can similarly be scheduled with low priority.
* The bug with read/write device drivers is fixed (forthcoming PR).
* An assertion failure is produced if `create_task` or `run_until_complete`
is called with a generator function [PR292](https://github.com/micropython/micropython-lib/pull/292).
@@ -31,7 +33,7 @@ with minimum latency. Consequently `asyncio_priority.py` is obsolete and should
be deleted from your system.

The facility for low priority coros formerly provided by `asyncio_priority.py`
- exists but is not yet documented.
+ is now implemented.

This modified version also provides for ultra low power consumption using a
module documented [here](./lowpower/README.md).
@@ -47,6 +49,12 @@ module documented [here](./lowpower/README.md).
2.2 [Timing accuracy](./FASTPOLL.md#22-timing-accuracy)
2.3 [Polling in uasyncio](./FASTPOLL.md#23-polling-in-usayncio)
3. [The modified version](./FASTPOLL.md#3-the-modified-version)
+ 3.1 [Fast I/O](./FASTPOLL.md#31-fast-I/O)
+ 3.2 [Low Priority](./FASTPOLL.md#32-low-priority)
+ 3.3 [Other Features](./FASTPOLL.md#33-other-features)
+ 3.4 [Low priority yield](./FASTPOLL.md#34-low-priority-yield)
+ 3.4.1 [Task Cancellation and Timeouts](./FASTPOLL.md#341-task-cancellation-and-timeouts)
+ 3.5 [Low priority callbacks](./FASTPOLL.md#35-low-priority-callbacks)
4. [ESP Platforms](./FASTPOLL.md#4-esp-platforms)
5. [Background](./FASTPOLL.md#4-background)
@@ -67,6 +75,8 @@ The benchmarks directory contains files demonstrating the performance gains
offered by prioritisation. They also offer illustrations of the use of these
features. Documentation is in the code.

+ * `benchmarks/latency.py` Shows the effect on latency with and without low
+ priority usage.
* `benchmarks/rate.py` Shows the frequency with which uasyncio schedules
minimal coroutines (coros).
* `benchmarks/rate_esp.py` As above for ESP32 and ESP8266.
@@ -77,9 +87,12 @@ features. Documentation is in the code.
* `fast_io/pin_cb.py` Demo of an I/O device driver which causes a pin state
change to trigger a callback.
* `fast_io/pin_cb_test.py` Demo of above.
+ * `benchmarks/call_lp.py` Demos low priority callbacks.
+ * `benchmarks/overdue.py` Demo of maximum overdue feature.
+ * `priority_test.py` Cancellation of low priority coros.

- With the exception of `rate_fastio`, benchmarks can be run against the official
- and priority versions of usayncio.
+ With the exceptions of `call_lp`, `priority` and `rate_fastio`, benchmarks can
+ be run against the official and priority versions of uasyncio.
# 2. Rationale
@@ -99,6 +112,12 @@ on every iteration of the scheduler. This enables faster response to real time
events and also enables higher precision millisecond-level delays to be
realised.

+ It also enables coros to yield control in a way which prevents them from
+ competing with coros which are ready for execution. Coros which have yielded in
+ a low priority fashion will not be scheduled until all "normal" coros are
+ waiting on a nonzero timeout. The benchmarks show that the improvement can
+ exceed two orders of magnitude.
+
## 2.1 Latency
Coroutines in uasyncio which are pending execution are scheduled in a "fair"
@@ -129,9 +148,20 @@ sufficient to avoid overruns.
In this version `handle_isr()` would be rewritten as a stream device driver
which could be expected to run with latency of just over 4ms.

+ Alternatively this latency may be reduced by enabling the `foo()` instances to
+ yield in a low priority manner. In the case where all coros other than
+ `handle_isr()` are low priority, the latency is reduced to 300μs, a figure
+ of about double the inherent latency of uasyncio.
+
+ The benchmark `benchmarks/latency.py` demonstrates this. Documentation is in
+ the code; it can be run against both official and priority versions. It
+ measures scheduler latency. Maximum application latency, measured relative to
+ the incidence of an asynchronous event, will be 300μs plus the worst-case
+ delay between yields of any one competing task.
+
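The fragment below is a sketch of that scenario rather than the benchmark itself.
It relies on the `after_ms()` low priority primitive and the `lp_len` argument
described in section 3; the task bodies, timings and loop sizes are purely
illustrative.

```python
import uasyncio as asyncio
import utime

def do_work():
    utime.sleep_ms(4)  # Stands in for 4ms of blocking, CPU-bound work

async def foo():
    while True:
        do_work()
        await asyncio.after_ms(0)  # Low priority yield: defer to ready tasks

async def fast_task():
    while True:
        await asyncio.sleep_ms(10)  # Becomes ready every 10ms
        # Time-critical response runs here with near-minimal scheduler latency

loop = asyncio.get_event_loop(lp_len=16)
loop.create_task(fast_task())
for _ in range(10):
    loop.create_task(foo())  # Ten competing, non-critical tasks
loop.run_forever()
```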
### 2.1.1 I/O latency

- The current version of `uasyncio` has even higher levels of latency for I/O
+ The official version of `uasyncio` has even higher levels of latency for I/O
scheduling. In the above case of ten coros using 4ms of CPU time between zero
delay yields, the latency of an I/O driver would be 80ms.
@@ -196,19 +226,41 @@ The `fast_io` version enables awaitable classes and asynchronous iterators to
run with lower latency by designing them to use the stream I/O mechanism. The
program `fast_io/ms_timer.py` provides an example.

+ Practical cases exist where the `foo()` tasks are not time-critical: in such
+ cases the performance of time-critical tasks may be enhanced by enabling
+ `foo()` to submit for rescheduling in a way which does not compete with tasks
+ requiring a fast response. In essence "slow" operations tolerate longer latency
+ and longer time delays so that fast operations meet their performance targets.
+ Examples are:
+
+ * User interface code. A system with ten pushbuttons might have a coro running
+ on each. A GUI touch detector coro needs to check a touch against a sequence of
+ objects. Both may tolerate 100ms of latency before users notice any lag; a
+ sketch of the pushbutton case follows this list.
+ * Networking code: a latency of 100ms may be dwarfed by that of the network.
+ * Mathematical code: there are cases where time-consuming calculations may
+ take place which are tolerant of delays. Examples are statistical analysis,
+ sensor fusion and astronomical calculations.
+ * Data logging.
+
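The following is only a sketch of that pushbutton case (pin numbers, the
debounce-free logic and the 50ms figure are illustrative), using the
`after_ms()` call described in section 3.2:

```python
import uasyncio as asyncio
from machine import Pin

async def poll_button(pin, callback):
    last = pin.value()
    while True:
        state = pin.value()
        if state != last:
            last = state
            if not state:  # Active-low press detected
                callback(pin)
        await asyncio.after_ms(50)  # Low priority: run when nothing urgent is ready

def on_press(pin):
    print('Pressed:', pin)

loop = asyncio.get_event_loop(lp_len=16)
for n in (12, 13, 14):  # Arbitrary GPIO numbers for illustration
    loop.create_task(poll_button(Pin(n, Pin.IN, Pin.PULL_UP), on_press))
loop.run_forever()
```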
###### [Contents](./FASTPOLL.md#contents)

# 3. The modified version
- The `fast_io` version adds an `ioq_len=0` argument to `get_event_loop`. The
- zero default causes the scheduler to operate as per the official version. If an
- I/O queue length > 0 is provided, I/O performed by `StreamReader` and
- `StreamWriter` objects will be prioritised over other coros.
+ The `fast_io` version adds `ioq_len=0` and `lp_len=0` arguments to
+ `get_event_loop`. These determine the lengths of the I/O and low priority
+ queues. The zero defaults cause the queues not to be instantiated, and the
+ scheduler operates as per the official version. If an I/O queue length > 0 is
+ provided, I/O performed by `StreamReader` and `StreamWriter` objects will be
+ prioritised over other coros. If a low priority queue length > 0 is specified,
+ tasks have the option to yield in such a way as to minimise competition with
+ other tasks.

Arguments to `get_event_loop()`:
- 1. `runq_len` Length of normal queue. Default 16 tasks.
- 2. `waitq_len` Length of wait queue. Default 16.
- 3. `ioq_len` Length of I/O queue. Default 0.
+ 1. `runq_len=16` Length of normal queue. Default 16 tasks.
+ 2. `waitq_len=16` Length of wait queue.
+ 3. `ioq_len=0` Length of I/O queue. Default: no queue is created.
+ 4. `lp_len=0` Length of low priority queue. Default: no queue.
+
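A minimal sketch of instantiating the loop with both optional queues enabled
(the queue lengths shown are purely illustrative):

```python
import uasyncio as asyncio

# ioq_len > 0 enables prioritised stream I/O; lp_len > 0 enables the low
# priority queue used by after(), after_ms() and call_after().
loop = asyncio.get_event_loop(runq_len=16, waitq_len=16, ioq_len=16, lp_len=16)
```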
+ ## 3.1 Fast I/O

Device drivers which are to be capable of running at high priority should be
written to use stream I/O: see
@@ -223,8 +275,31 @@ This behaviour may be desired where short bursts of fast data are handled.
Otherwise drivers of such hardware should be designed to avoid hogging, using
techniques like buffering or timing.

- The version also supports a `version` variable containing 'fast_io'. This
- enables the presence of this version to be determined at runtime.
+ ## 3.2 Low Priority
+
+ The low priority solution is based on the notion of "after" implying a time
+ delay which can be expected to be less precise than the asyncio standard calls.
+ The `fast_io` version adds the following awaitable instances:
+
+ * `after(t)` Low priority version of `sleep(t)`.
+ * `after_ms(t)` Low priority version of `sleep_ms(t)`.
+
+ It adds the following event loop methods:
+
+ * `loop.call_after(t, callback, *args)`
+ * `loop.call_after_ms(t, callback, *args)`
+ * `loop.max_overdue_ms(t=None)` This sets the maximum time a low priority task
+ will wait before being scheduled. A value of 0 corresponds to no limit. The
+ default arg `None` leaves the period unchanged. The method always returns the
+ period value. If there is no limit and a competing task runs a loop with a zero
+ delay yield, the low priority yield will be postponed indefinitely.
+
+ See [Low priority callbacks](./FASTPOLL.md#35-low-priority-callbacks).
+
+ ## 3.3 Other Features
+
+ This version has a `version` variable containing 'fast_io'. This enables the
+ presence of this version to be determined at runtime.
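For example, portable code might test for the fork at run time before using its
extensions; the following is a sketch based on the description above (the
fallback branch is illustrative):

```python
import uasyncio as asyncio

# The 'version' variable exists only in the fast_io fork.
if getattr(asyncio, 'version', None) == 'fast_io':
    loop = asyncio.get_event_loop(ioq_len=16, lp_len=16)
else:
    loop = asyncio.get_event_loop()  # Official uasyncio: no optional queues
```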
It also supports a `got_event_loop()` function returning a `bool`: `True` if
the event loop has been instantiated. The purpose is to enable code which uses
@@ -246,6 +321,97 @@ bar = Bar() # Constructor calls get_event_loop()
# and renders these args inoperative
loop = asyncio.get_event_loop(runq_len=40, waitq_len=40)
```
+ ## 3.4 Low priority yield
+
+ Consider this code fragment:
+
+ ```python
+ import uasyncio as asyncio
+ loop = asyncio.get_event_loop(lp_len=16)
+
+ async def foo():
+     while True:
+         # Do something
+         await asyncio.after(1.5)  # Wait a minimum of 1.5s
+         # code
+         await asyncio.after_ms(20)  # Wait a minimum of 20ms
+ ```
+
+ These `await` statements cause the coro to suspend execution for the minimum
+ time specified. Low priority coros run in a mutually "fair" round-robin fashion.
+ By default the coro will only be rescheduled when all "normal" coros are waiting
+ on a nonzero time delay. A "normal" coro is one that has yielded by any other
+ means.
+
+ This behaviour can be overridden to limit the degree to which low priority
+ coros can become overdue. For the reasoning behind this consider this code:
+
+ ```python
+ import uasyncio as asyncio
+
+ async def foo():
+     while True:
+         # Do something
+         await asyncio.after(0)
+ ```
+
+ By default a coro yielding in this way will be re-scheduled only when there are
+ no "normal" coros ready for execution, i.e. when all are waiting on a nonzero
+ delay. The implication of having this degree of control is that if a coro
+ issues:
+
+ ```python
+ while True:
+     await asyncio.sleep(0)
+     # Do something which does not yield to the scheduler
+ ```
+
+ low priority tasks will never be executed. Normal coros must sometimes wait on
+ a nonzero delay to enable the low priority ones to be scheduled. This is
+ analogous to running an infinite loop without yielding.
+
+ This behaviour can be modified by issuing:
+
+ ```python
+ loop = asyncio.get_event_loop(lp_len=16)
+ loop.max_overdue_ms(1000)
+ ```
+
+ In this instance a task which has yielded in a low priority manner will be
+ rescheduled in the presence of pending "normal" tasks if they become overdue by
+ more than 1s.
+
+ ### 3.4.1 Task Cancellation and Timeouts
+
+ Tasks which yield in a low priority manner may be subject to timeouts or be
+ cancelled in the same way as normal tasks. See [Task cancellation](./TUTORIAL.md#36-task-cancellation)
+ and [Coroutines with timeouts](./TUTORIAL.md#44-coroutines-with-timeouts).
+
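As an illustrative sketch only (it assumes the module-level `cancel()` call
described in the tutorial's task cancellation section; timing details of
cancellation are covered there), a low priority task is cancelled no differently
from a normal one:

```python
import uasyncio as asyncio

async def logger():
    try:
        while True:
            await asyncio.after_ms(100)  # Low priority yield
            # ... write a log entry ...
    except asyncio.CancelledError:
        print('logger cancelled')  # Tidy up on cancellation

async def main(loop):
    task = logger()
    loop.create_task(task)
    await asyncio.sleep(2)
    asyncio.cancel(task)    # Assumed cancellation call per the tutorial link
    await asyncio.sleep(1)  # Cancellation occurs when logger is next scheduled

loop = asyncio.get_event_loop(lp_len=16)
loop.run_until_complete(main(loop))
```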
+ ###### [Contents](./FASTPOLL.md#contents)
+
+ ## 3.5 Low priority callbacks
+
+ The following `EventLoop` methods enable callback functions to be scheduled
+ to run when all normal coros are waiting on a delay or when `max_overdue_ms`
+ has elapsed:
+
+ `call_after` Schedule a callback with low priority. Positional args:
+ 1. `delay` Minimum delay in seconds. May be a float or integer.
+ 2. `callback` The callback to run.
+ 3. `*args` Optional comma-separated positional args for the callback.
+
+ The delay specifies a minimum period before the callback will run and may have
+ a value of 0. The period may be extended depending on other high and low
+ priority tasks which are pending execution.
+
+ A simple demo of this is `benchmarks/call_lp.py`. Documentation is in the
+ code.
+
+ `call_after_ms(delay, callback, *args)` Call with low priority. Positional
+ args:
+ 1. `delay` Integer. Minimum delay in milliseconds before the callback runs.
+ 2. `callback` The callback to run.
+ 3. `*args` Optional positional args for the callback.
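A minimal sketch based on the signatures above (the callback and the delays are
arbitrary):

```python
import uasyncio as asyncio

def report(msg):
    print(msg)

loop = asyncio.get_event_loop(lp_len=16)
# Each callback runs after at least the stated delay, but only when no
# normal coro is ready (or when any max_overdue_ms limit has expired).
loop.call_after(2, report, 'ran after at least 2s')
loop.call_after_ms(500, report, 'ran after at least 500ms')
loop.run_forever()
```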
###### [Contents](./FASTPOLL.md#contents)