PEP 789: can't just deliver exceptions
Zac-HD committed May 20, 2024
1 parent 41b2684 commit 76423d2
Showing 1 changed file with 38 additions and 4 deletions.
42 changes: 38 additions & 4 deletions peps/pep-0789.rst
@@ -46,7 +46,7 @@ within that context (...scope). In asyncio, this is implicit in the design of
respectively cancel the contained work after the specified duration, or cancel
sibling tasks when one of them raises an exception. The core functionality of
a cancel scope is synchronous, but the user-facing context managers may be
either sync or async. [#trio-cancel-scope]_
either sync or async. [#trio-cancel-scope]_ [#tg-cs]_
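For concreteness, here is a minimal sketch of both flavours, using
``trio.move_on_after`` for the synchronous form and ``asyncio.timeout`` /
``asyncio.TaskGroup`` for the asynchronous ones (an illustration added here,
not an excerpt from the PEP's own examples):

.. code-block:: python

   import asyncio

   import trio  # third-party, shown only for the synchronous form


   async def trio_style():
       # Trio's cancel scope is a *synchronous* context manager, even though
       # it governs async work running inside it.
       with trio.move_on_after(1):
           await trio.sleep(10)  # cancelled after one second
       # execution simply continues here once the deadline has passed


   async def asyncio_style():
       # asyncio exposes the same idea only through *async* context managers.
       try:
           async with asyncio.timeout(1):
               await asyncio.sleep(10)  # cancelled after one second
       except TimeoutError:
           pass

       async with asyncio.TaskGroup() as tg:
           # if either task raised, its sibling would be cancelled
           tg.create_task(asyncio.sleep(0.1))
           tg.create_task(asyncio.sleep(0.1))


   # Each sketch runs under its own event loop:
   # trio.run(trio_style) and asyncio.run(asyncio_style())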

This structured approach works beautifully, unless you hit one specific sharp
edge: breaking the nesting structure by ``yield``\ ing inside a cancel scope.
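To make that sharp edge concrete before the examples below, here is a small
asyncio sketch (an illustration, not one of the examples that follow): an
async generator which yields while a timeout is active leaves that cancel
scope armed over frames it does not control.

.. code-block:: python

   import asyncio


   async def timed_values():
       # ``yield`` suspends this frame while the timeout's cancel scope is
       # still armed.
       async with asyncio.timeout(0.1):
           while True:
               await asyncio.sleep(0)  # stand-in for real async work
               yield "value"


   async def caller():
       async for value in timed_values():
           # If the deadline expires while we're busy here, the resulting
           # CancelledError is raised at *this* await, in a frame that never
           # entered the timeout and cannot convert it to TimeoutError.
           await asyncio.sleep(1)


   # asyncio.run(caller()) ends with a bare CancelledError escaping caller().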
@@ -69,6 +69,11 @@ Let's consider three examples, to see what this might look like in practice.
here, but say "cancel scope" when referring to the framework-independent
concept.
.. [#tg-cs]
A ``TaskGroup`` is not *only* a cancel scope, but preventing yields would
resolve its further problem too. See :ref:`just-deliver`.
Leaking a timeout to the outer scope
------------------------------------

@@ -490,9 +495,6 @@ cancellations are eventually directed to the correct scope, but only after they
have wreaked havoc elsewhere. Plausibly still useful to ensure that cleanup is
*timely*, but does not solve this problem.

:pep:`568` - would make it possible to work around some bugs which this PEP
makes impossible. We recommend marking it as rejected.

If you want more details on all the specific problems that arise, and how they
relate to this proposal, and to PEP 533 and PEP 568, then see `this comment
<https://github.com/python-trio/trio/issues/264#issuecomment-418989328>`__ and
@@ -584,6 +586,38 @@ I'd suggest two library features to partially replace them:
<https://discuss.python.org/t/using-exceptiongroup-at-anthropic-experience-report/20888>`__
.. _just-deliver:

Can't we just deliver exceptions to the *right* place?
------------------------------------------------------

If we implemented :pep:`568` (Generator-sensitivity for Context Variables; see
also :pep:`550`), it would be possible to handle exceptions from timeouts: the
event loop could avoid firing a ``CancelledError`` until the generator frame
which contains the context manager is on the stack - either when the generator
is resumed, or when it is finalized.

This can take arbitrarily long; even if we implemented :pep:`533` to ensure
timely cleanup on exiting (async) for-loops, it's still possible to drive a
generator manually with next/send.
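As a rough sketch of that "deliver on resume" idea (this is not PEP 568's
actual mechanism; ``DeferredCancel`` and the surrounding helper names are
hypothetical, used only for illustration):

.. code-block:: python

   import asyncio


   class DeferredCancel:
       """Record a cancellation instead of firing it immediately, and deliver
       it only once the owning generator frame is back on the stack."""

       def __init__(self):
           self._pending = False

       def cancel_soon(self):
           # Called from a timer callback while the generator is suspended.
           self._pending = True

       def checkpoint(self):
           # Called by the generator once its frame is actually running again.
           if self._pending:
               raise asyncio.CancelledError


   async def timed_values(scope):
       while True:
           scope.checkpoint()  # the cancellation is only ever raised here...
           yield "value"       # ...never while we are suspended at this yield


   async def caller():
       scope = DeferredCancel()
       asyncio.get_running_loop().call_later(0.1, scope.cancel_soon)
       try:
           async for value in timed_values(scope):
               await asyncio.sleep(1)  # the caller is never cancelled mid-await
       except asyncio.CancelledError:
           pass  # ...but delivery waited a full second for this resumption

Even in this best case, delivery waits until the caller happens to resume the
generator, and a consumer driving it by hand with ``send``/``asend`` can
postpone that indefinitely.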

However, this doesn't address the other problem with ``TaskGroup``. The model
for generators is that you put a stack frame in suspended animation and can then
treat it as an inert value which can be stored, moved around, and maybe
discarded or revived in some arbitrary place. The model for structured
concurrency is that your stack becomes a tree, with child tasks encapsulated
within some parent frame. They're extending the basic structured programming
model in different, and unfortunately incompatible, directions.

Note that ``TaskGroup`` *would* play nicely with generators if suspending the
frame with the context manager also suspended all child tasks. Note also that
this would cause all of our motivating examples to deadlock, as we wait for
values to be produced by suspended child tasks - a prohibitive design problem.
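To see why, consider the pattern underlying those motivating examples, sketched
here with an ``asynccontextmanager``-wrapped ``TaskGroup`` feeding an
``asyncio.Queue`` (illustrative names, not code taken from the PEP):

.. code-block:: python

   import asyncio
   from contextlib import asynccontextmanager


   @asynccontextmanager
   async def open_numbers():
       queue = asyncio.Queue()

       async def producer():
           for i in range(3):
               await asyncio.sleep(0.1)  # stand-in for real work
               await queue.put(i)

       async with asyncio.TaskGroup() as tg:
           tg.create_task(producer())
           # While the caller runs the ``with`` body, this frame is suspended
           # at the yield below.  If suspending it also froze the producer
           # task, ``queue.get()`` in the consumer could never complete.
           yield queue


   async def consumer():
       async with open_numbers() as queue:
           for _ in range(3):
               print(await queue.get())


   # asyncio.run(consumer()) works today precisely because the producer keeps
   # running while the generator frame is suspended at ``yield``.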

We don't think it's worth adding this much machinery to handle cancel scopes,
while leaving task groups (and no-exception cases) broken.


Copyright
=========

