<dt id="torch.cuda.stream"><span class="sig-prename descclassname"><span class="pre">torch.cuda.</span></span><span class="sig-name descname"><span class="pre">stream</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">stream</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda.html#stream"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.stream" title="Permalink to this definition">¶</a></dt>
<dd><p>Wrapper around the Context-manager StreamContext that
selects a given stream.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>stream</strong> (<a class="reference internal" href="torch.cuda.Stream.html#torch.cuda.Stream" title="torch.cuda.Stream"><em>Stream</em></a>) – selected stream. This manager is a no-op if it’s
<code class="docutils literal notranslate"><span class="pre">None</span></code>.</p></dd>
</dl>
</dd>
<section id="stream">
<h1>Stream<a class="headerlink" href="#stream" title="Permalink to this heading">¶</a></h1>
<dt id="torch.cuda.Stream"><em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">torch.cuda.</span></span><span class="sig-name descname"><span class="pre">Stream</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">device</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">priority</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">0</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream" title="Permalink to this definition">¶</a></dt>
<dd><p>Wrapper around a CUDA stream.</p>
<p>A CUDA stream is a linear sequence of execution that belongs to a specific
device, independent from other streams. See <a class="reference internal" href="../notes/cuda.html#cuda-semantics"><span class="std std-ref">CUDA semantics</span></a> for
details.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>device</strong> (<a class="reference internal" href="../tensor_attributes.html#torch.device" title="torch.device"><em>torch.device</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.10)"><em>int</em></a><em>, </em><em>optional</em>) – a device on which to allocate
the stream. If <a class="reference internal" href="torch.cuda.device.html#torch.cuda.device" title="torch.cuda.device"><code class="xref py py-attr docutils literal notranslate"><span class="pre">device</span></code></a> is <code class="docutils literal notranslate"><span class="pre">None</span></code> (default) or a negative
integer, this will use the current device.</p></li>
<li><p><strong>priority</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.10)"><em>int</em></a><em>, </em><em>optional</em>) – priority of the stream. Can be either
-1 (high priority) or 0 (low priority). By default, streams have
priority 0.</p></li>
</ul>
</dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Although CUDA versions >= 11 support more than two levels of
priorities, in PyTorch, we only support two levels of priorities.</p>
</div>
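As an illustrative sketch (not part of the reference text; it assumes PyTorch is installed and guards on a CUDA device being present), a stream can be constructed and paired with the <code class="docutils literal notranslate"><span class="pre">torch.cuda.stream</span></code> context manager:

```python
import torch

# Hedged sketch: runs only when a CUDA device is present.
if torch.cuda.is_available():
    # priority=-1 requests a high-priority stream; 0 (the default) is low priority.
    s = torch.cuda.Stream(priority=-1)
    a = torch.randn(4, device="cuda")
    with torch.cuda.stream(s):
        # Kernels launched inside this block are enqueued on `s`,
        # not on the default stream of the current device.
        b = a * 2
    torch.cuda.synchronize()
```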
<dt id="torch.cuda.Stream.query"><span class="sig-name descname"><span class="pre">query</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.query"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.query" title="Permalink to this definition">¶</a></dt>
<dd><p>Checks if all the work submitted has been completed.</p>
</dd>
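A hedged polling sketch (assumes a CUDA device; otherwise it is a no-op). <code class="docutils literal notranslate"><span class="pre">query()</span></code> never blocks the host; it only reports whether the stream has drained:

```python
import torch

# Hedged sketch: polling a stream for completion (CUDA device required).
if torch.cuda.is_available():
    s = torch.cuda.Stream()
    with torch.cuda.stream(s):
        x = torch.randn(512, 512, device="cuda") @ torch.randn(512, 512, device="cuda")
    busy = s.query()   # may be False while the matmul is still running
    s.synchronize()
    assert s.query()   # after synchronize(), all submitted work is done
```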
<dt id="torch.cuda.Stream.record_event"><span class="sig-name descname"><span class="pre">record_event</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">event</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.record_event"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.record_event" title="Permalink to this definition">¶</a></dt>
<dd><p>Records an event.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>event</strong> (<a class="reference internal" href="torch.cuda.Event.html#torch.cuda.Event" title="torch.cuda.Event"><em>torch.cuda.Event</em></a><em>, </em><em>optional</em>) – event to record. If not given, a new one
will be allocated.</p></dd>
</dl>
</dd>
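A hedged sketch of recording an event at the current point in a stream (assumes a CUDA device; the returned <code class="docutils literal notranslate"><span class="pre">torch.cuda.Event</span></code> can later be queried or waited on):

```python
import torch

# Hedged sketch: recording an event on a stream (CUDA device required).
if torch.cuda.is_available():
    s = torch.cuda.Stream()
    with torch.cuda.stream(s):
        y = torch.ones(8, device="cuda") + 1
    ev = s.record_event()  # no event passed, so a new torch.cuda.Event is allocated
    ev.synchronize()       # block the host until the event is reached
    assert ev.query()      # the recorded event has completed
```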
<dt id="torch.cuda.Stream.synchronize"><span class="sig-name descname"><span class="pre">synchronize</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.synchronize"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.synchronize" title="Permalink to this definition">¶</a></dt>
<dd><p>Wait for all the kernels in this stream to complete.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This is a wrapper around <code class="docutils literal notranslate"><span class="pre">cudaStreamSynchronize()</span></code>: see
<a class="reference external" href="https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html">CUDA Stream documentation</a> for more info.</p>
</div>
</dd>
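A hedged sketch of the common pattern: enqueue work on a side stream, then block the host before reading results (assumes a CUDA device):

```python
import torch

# Hedged sketch: blocking the host until a stream drains (CUDA device required).
if torch.cuda.is_available():
    s = torch.cuda.Stream()
    with torch.cuda.stream(s):
        z = torch.ones(16, device="cuda") * 3
    s.synchronize()  # host blocks until every kernel queued on `s` finishes
    # Safe to read the result on the host now.
    assert z.sum().item() == 48.0
```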
<dt id="torch.cuda.Stream.wait_event"><span class="sig-name descname"><span class="pre">wait_event</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">event</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.wait_event"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.wait_event" title="Permalink to this definition">¶</a></dt>
<dd><p>Makes all future work submitted to the stream wait for an event.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>event</strong> (<a class="reference internal" href="torch.cuda.Event.html#torch.cuda.Event" title="torch.cuda.Event"><em>torch.cuda.Event</em></a>) – an event to wait for.</p></dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This is a wrapper around <code class="docutils literal notranslate"><span class="pre">cudaStreamWaitEvent()</span></code>: see
<a class="reference external" href="https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html">CUDA Stream documentation</a> for more info.</p>
<p>This function returns without waiting for <code class="xref py py-attr docutils literal notranslate"><span class="pre">event</span></code>: only future
operations are affected.</p>
</div>
</dd>
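A hedged producer/consumer sketch (assumes a CUDA device): an event recorded on one stream orders future work on another, without blocking the host:

```python
import torch

# Hedged sketch: ordering two streams with an event (CUDA device required).
if torch.cuda.is_available():
    producer = torch.cuda.Stream()
    consumer = torch.cuda.Stream()
    with torch.cuda.stream(producer):
        t = torch.full((4,), 2.0, device="cuda")
    ev = producer.record_event()
    # wait_event() returns immediately on the host; only work submitted to
    # `consumer` after this call waits for `ev`.
    consumer.wait_event(ev)
    with torch.cuda.stream(consumer):
        out = t * 5
    torch.cuda.synchronize()
    assert out.tolist() == [10.0, 10.0, 10.0, 10.0]
```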
<dt id="torch.cuda.Stream.wait_stream"><span class="sig-name descname"><span class="pre">wait_stream</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">stream</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="../_modules/torch/cuda/streams.html#Stream.wait_stream"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.cuda.Stream.wait_stream" title="Permalink to this definition">¶</a></dt>
<dd><p>Synchronizes with another stream.</p>
<p>All future work submitted to this stream will wait until all kernels
submitted to a given stream at the time of call complete.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><p><strong>stream</strong> (<a class="reference internal" href="#torch.cuda.Stream" title="torch.cuda.Stream"><em>Stream</em></a>) – a stream to synchronize.</p></dd>
</dl>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>This function returns without waiting for currently enqueued
kernels in <a class="reference internal" href="torch.cuda.stream.html#torch.cuda.stream" title="torch.cuda.stream"><code class="xref py py-attr docutils literal notranslate"><span class="pre">stream</span></code></a>: only future operations are affected.</p>
</div>
</dd>
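A hedged sketch of stream-to-stream ordering (assumes a CUDA device): work enqueued on one stream becomes a prerequisite for future work on another, with no host-side blocking:

```python
import torch

# Hedged sketch: making one stream wait on another (CUDA device required).
if torch.cuda.is_available():
    s1 = torch.cuda.Stream()
    s2 = torch.cuda.Stream()
    with torch.cuda.stream(s1):
        a = torch.arange(3, device="cuda", dtype=torch.float32)
    # s2 waits only for kernels already queued on s1 at the time of this call;
    # kernels enqueued on s1 afterwards are not covered.
    s2.wait_stream(s1)
    with torch.cuda.stream(s2):
        b = a + 1
    torch.cuda.synchronize()
    assert b.tolist() == [1.0, 2.0, 3.0]
```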