
Commit

Merge branch 'main' into pythongh-109719
iritkatriel authored Sep 22, 2023
2 parents 6a5e675 + 3e8fcb7 commit 62d79d8
Showing 16 changed files with 277 additions and 146 deletions.
1 change: 0 additions & 1 deletion Doc/library/itertools.rst
@@ -845,7 +845,6 @@ which incur interpreter overhead.

def quantify(iterable, pred=bool):
"Given a predicate that returns True or False, count the True results."
"Count how many times the predicate is True"
return sum(map(pred, iterable))

def all_equal(iterable):
44 changes: 23 additions & 21 deletions Doc/whatsnew/3.12.rst
@@ -291,9 +291,11 @@ can be used to customize buffer creation.
PEP 684: A Per-Interpreter GIL
------------------------------

Sub-interpreters may now be created with a unique GIL per interpreter.
:pep:`684` introduces a per-interpreter :term:`GIL <global interpreter lock>`,
so that sub-interpreters may now be created with a unique GIL per interpreter.
This allows Python programs to take full advantage of multiple CPU
cores.
cores. This is currently only available through the C-API,
though a Python API is :pep:`anticipated for 3.13 <554>`.

Use the new :c:func:`Py_NewInterpreterFromConfig` function to
create an interpreter with its own GIL::
@@ -312,22 +314,22 @@
For further examples of how to use the C-API for sub-interpreters with a
per-interpreter GIL, see :source:`Modules/_xxsubinterpretersmodule.c`.

A Python API is anticipated for 3.13. (See :pep:`554`.)

(Contributed by Eric Snow in :gh:`104210`, etc.)

.. _whatsnew312-pep669:

PEP 669: Low impact monitoring for CPython
------------------------------------------

CPython 3.12 now supports the ability to monitor calls,
returns, lines, exceptions and other events using instrumentation.
:pep:`669` defines a new :mod:`API <sys.monitoring>` for profilers,
debuggers, and other tools to monitor events in CPython.
It covers a wide range of events, including calls,
returns, lines, exceptions, jumps, and more.
This means that you only pay for what you use, providing support
for near-zero overhead debuggers and coverage tools.

See :mod:`sys.monitoring` for details.

(Contributed by Mark Shannon in :gh:`103083`.)
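
As an illustration only (not part of this commit), here is a minimal sketch of
how a profiling tool might hook into the new :mod:`sys.monitoring` API; the
tool name ``"demo-profiler"`` and the ``work()`` function are made up::

    import sys

    TOOL = sys.monitoring.PROFILER_ID
    sys.monitoring.use_tool_id(TOOL, "demo-profiler")   # hypothetical tool name

    def on_py_start(code, instruction_offset):
        # Called whenever a monitored Python function starts executing.
        print("entering", code.co_qualname)

    sys.monitoring.register_callback(
        TOOL, sys.monitoring.events.PY_START, on_py_start)
    sys.monitoring.set_events(TOOL, sys.monitoring.events.PY_START)

    def work():
        return sum(range(10))

    work()   # expected to report: entering work

    # Disable events and release the tool id when done.
    sys.monitoring.set_events(TOOL, sys.monitoring.events.NO_EVENTS)
    sys.monitoring.free_tool_id(TOOL)

Because instrumentation is installed only for the events a tool asks for,
unmonitored code keeps running at full speed, which is what enables the
near-zero overhead tools mentioned above.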

New Features Related to Type Hints
==================================
@@ -459,12 +461,12 @@ and others in :gh:`103764`.)
Other Language Changes
======================

* Add :ref:`perf_profiling` through the new
environment variable :envvar:`PYTHONPERFSUPPORT`,
the new command-line option :option:`-X perf <-X>`,
* Add :ref:`support for the perf profiler <perf_profiling>` through the new
environment variable :envvar:`PYTHONPERFSUPPORT`
and command-line option :option:`-X perf <-X>`,
as well as the new :func:`sys.activate_stack_trampoline`,
:func:`sys.deactivate_stack_trampoline`,
and :func:`sys.is_stack_trampoline_active` APIs.
and :func:`sys.is_stack_trampoline_active` functions.
(Design by Pablo Galindo. Contributed by Pablo Galindo and Christian Heimes
with contributions from Gregory P. Smith [Google] and Mark Shannon
in :gh:`96123`.)
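
As a hedged sketch (not part of this commit) of the new :mod:`sys` functions;
perf trampoline support is expected to work on Linux only, so the call is
guarded::

    import sys

    try:
        sys.activate_stack_trampoline("perf")    # "perf" is the only backend
    except (AttributeError, ValueError) as exc:  # unsupported platform/backend
        print("perf trampolines unavailable:", exc)
    else:
        try:
            assert sys.is_stack_trampoline_active()
            ...  # run the workload to be profiled under ``perf record``
        finally:
            sys.deactivate_stack_trampoline()
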
@@ -473,7 +475,7 @@ Other Language Changes
have a new *filter* argument that allows limiting tar features that may be
surprising or dangerous, such as creating files outside the destination
directory.
See :ref:`tarfile-extraction-filter` for details.
See :ref:`tarfile extraction filters <tarfile-extraction-filter>` for details.
In Python 3.14, the default will switch to ``'data'``.
(Contributed by Petr Viktorin in :pep:`706`.)
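
For illustration only (not from this commit), extraction with the ``'data'``
filter looks like the following; the archive and destination names are
hypothetical::

    import tarfile

    # The 'data' filter rejects absolute paths, members escaping the target
    # directory, special files, and similar surprises (PEP 706).
    with tarfile.open("release.tar.gz") as tar:          # hypothetical archive
        tar.extractall(path="unpacked", filter="data")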

@@ -501,8 +503,8 @@ Other Language Changes
* A backslash-character pair that is not a valid escape sequence now generates
a :exc:`SyntaxWarning`, instead of :exc:`DeprecationWarning`.
For example, ``re.compile("\d+\.\d+")`` now emits a :exc:`SyntaxWarning`
(``"\d"`` is an invalid escape sequence), use raw strings for regular
expression: ``re.compile(r"\d+\.\d+")``.
(``"\d"`` is an invalid escape sequence, use raw strings for regular
expression: ``re.compile(r"\d+\.\d+")``).
In a future Python version, :exc:`SyntaxError` will eventually be raised,
instead of :exc:`SyntaxWarning`.
(Contributed by Victor Stinner in :gh:`98401`.)
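
A small hedged demonstration (not part of this commit); the offending source
text is compiled at runtime so the warning can be observed::

    import re
    import warnings

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        # The compiled source contains the invalid escape \d; on 3.12 this
        # emits SyntaxWarning (previously DeprecationWarning).
        compile('pattern = "\\d+"', "<demo>", "exec")
    print(any(issubclass(w.category, SyntaxWarning) for w in caught))  # True

    # The recommended fix: a raw string keeps the backslash intact for re.
    print(bool(re.fullmatch(r"\d+", "2023")))   # True
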
@@ -531,7 +533,7 @@ Other Language Changes
when summing floats or mixed ints and floats.
(Contributed by Raymond Hettinger in :gh:`100425`.)
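
The (partially collapsed) bullet above refers to :func:`sum` switching to
Neumaier summation; a tiny hedged check of the expected effect::

    values = [0.1] * 10
    print(sum(values))   # expected: 1.0 on 3.12; 3.11 and earlier gave
                         # 0.9999999999999999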

* Exceptions raised in a typeobject's ``__set_name__`` method are no longer
* Exceptions raised in a class or type's ``__set_name__`` method are no longer
wrapped by a :exc:`RuntimeError`. Context information is added to the
exception as a :pep:`678` note. (Contributed by Irit Katriel in :gh:`77757`.)
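
A hedged sketch (not from this commit) of the new behaviour; the descriptor
and class names are made up::

    class Descriptor:
        def __set_name__(self, owner, name):
            raise ValueError(f"refusing to bind {name!r}")

    try:
        class Widget:
            size = Descriptor()
    except ValueError as exc:     # 3.11 and earlier wrapped this in RuntimeError
        print(exc)                # refusing to bind 'size'
        print(exc.__notes__)      # PEP 678 note describing the __set_name__ call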

@@ -567,7 +569,7 @@ asyncio
* Added :func:`asyncio.eager_task_factory` and :func:`asyncio.create_eager_task_factory`
functions to allow opting an event loop in to eager task execution,
making some use-cases 2x to 5x faster.
(Contributed by Jacob Bower & Itamar O in :gh:`102853`, :gh:`104140`, and :gh:`104138`)
(Contributed by Jacob Bower & Itamar Oren in :gh:`102853`, :gh:`104140`, and :gh:`104138`)
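
As a hedged illustration (not part of this commit) of opting a loop in to
eager task execution; ``fetch_cached`` is a made-up coroutine::

    import asyncio

    async def fetch_cached():
        return 42   # finishes without awaiting anything

    async def main():
        asyncio.get_running_loop().set_task_factory(asyncio.eager_task_factory)
        task = asyncio.create_task(fetch_cached())
        print(task.done())   # True: the coroutine already ran eagerly
        print(await task)    # 42

    asyncio.run(main())

When a coroutine completes without blocking, the task never needs to be
scheduled on the event loop at all, which is where the 2x to 5x speedups for
such use-cases come from.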

* On Linux, :mod:`asyncio` uses :class:`asyncio.PidfdChildWatcher` by default
if :func:`os.pidfd_open` is available and functional instead of
@@ -594,7 +596,7 @@ asyncio
(Contributed by Kumar Aditya in :gh:`99388`.)

* Add C implementation of :func:`asyncio.current_task` for 4x-6x speedup.
(Contributed by Itamar Ostricher and Pranav Thulasiram Bhat in :gh:`100344`.)
(Contributed by Itamar Oren and Pranav Thulasiram Bhat in :gh:`100344`.)

* :func:`asyncio.iscoroutine` now returns ``False`` for generators as
:mod:`asyncio` does not support legacy generator-based coroutines.
@@ -985,7 +987,7 @@ Optimizations
(Contributed by Serhiy Storchaka in :gh:`91524`.)

* Speed up :class:`asyncio.Task` creation by deferring expensive string formatting.
(Contributed by Itamar O in :gh:`103793`.)
(Contributed by Itamar Oren in :gh:`103793`.)

* The :func:`tokenize.tokenize` and :func:`tokenize.generate_tokens` functions are
up to 64% faster as a side effect of the changes required to cover :pep:`701` in
@@ -1000,9 +1002,9 @@
CPython bytecode changes
========================

* Remove the :opcode:`LOAD_METHOD` instruction. It has been merged into
* Remove the :opcode:`!LOAD_METHOD` instruction. It has been merged into
:opcode:`LOAD_ATTR`. :opcode:`LOAD_ATTR` will now behave like the old
:opcode:`LOAD_METHOD` instruction if the low bit of its oparg is set.
:opcode:`!LOAD_METHOD` instruction if the low bit of its oparg is set.
(Contributed by Ken Jin in :gh:`93429`.)
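
A quick hedged way to observe the change (not part of this commit) is to
disassemble a method call::

    import dis

    # On 3.12 the attribute load below compiles to LOAD_ATTR with the low bit
    # of its oparg set (no LOAD_METHOD appears); 3.11 emitted LOAD_METHOD.
    dis.dis("obj.method()")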

* Remove the :opcode:`!JUMP_IF_FALSE_OR_POP` and :opcode:`!JUMP_IF_TRUE_OR_POP`
@@ -1838,7 +1840,7 @@ New Features
* Added :c:func:`PyCode_AddWatcher` and :c:func:`PyCode_ClearWatcher`
APIs to register callbacks to receive notification on creation and
destruction of code objects.
(Contributed by Itamar Ostricher in :gh:`91054`.)
(Contributed by Itamar Oren in :gh:`91054`.)

* Add :c:func:`PyFrame_GetVar` and :c:func:`PyFrame_GetVarString` functions to
get a frame variable by its name.
14 changes: 8 additions & 6 deletions Lib/asyncio/subprocess.py
@@ -147,15 +147,17 @@ def kill(self):

async def _feed_stdin(self, input):
debug = self._loop.get_debug()
if input is not None:
self.stdin.write(input)
if debug:
logger.debug(
'%r communicate: feed stdin (%s bytes)', self, len(input))
try:
if input is not None:
self.stdin.write(input)
if debug:
logger.debug(
'%r communicate: feed stdin (%s bytes)', self, len(input))

await self.stdin.drain()
except (BrokenPipeError, ConnectionResetError) as exc:
# communicate() ignores BrokenPipeError and ConnectionResetError
# communicate() ignores BrokenPipeError and ConnectionResetError.
# write() and drain() can raise these exceptions.
if debug:
logger.debug('%r communicate: stdin got %r', self, exc)

18 changes: 15 additions & 3 deletions Lib/concurrent/futures/process.py
@@ -71,6 +71,11 @@ def __init__(self):
self._reader, self._writer = mp.Pipe(duplex=False)

def close(self):
# Please note that we do not take the shutdown lock when
# calling clear() (to avoid deadlocking) so this method can
# only be called safely from the same thread as all calls to
# clear() even if you hold the shutdown lock. Otherwise we
# might try to read from the closed pipe.
if not self._closed:
self._closed = True
self._writer.close()
@@ -426,8 +431,12 @@ def wait_result_broken_or_wakeup(self):
elif wakeup_reader in ready:
is_broken = False

with self.shutdown_lock:
self.thread_wakeup.clear()
# No need to hold the _shutdown_lock here because:
# 1. we're the only thread to use the wakeup reader
# 2. we're also the only thread to call thread_wakeup.close()
# 3. we want to avoid a possible deadlock when both reader and writer
# would block (gh-105829)
self.thread_wakeup.clear()

return result_item, is_broken, cause

@@ -717,7 +726,10 @@ def __init__(self, max_workers=None, mp_context=None,
# as it could result in a deadlock if a worker process dies with the
# _result_queue write lock still acquired.
#
# _shutdown_lock must be locked to access _ThreadWakeup.
# _shutdown_lock must be locked to access _ThreadWakeup.close() and
# .wakeup(). Care must also be taken to not call clear or close from
# more than one thread since _ThreadWakeup.clear() is not protected by
# the _shutdown_lock
self._executor_manager_thread_wakeup = _ThreadWakeup()

# Create communication channels for the executor
52 changes: 42 additions & 10 deletions Lib/test/test_asyncio/test_subprocess.py
@@ -1,6 +1,7 @@
import os
import signal
import sys
import textwrap
import unittest
import warnings
from unittest import mock
@@ -12,9 +13,14 @@
from test import support
from test.support import os_helper

if sys.platform != 'win32':

MS_WINDOWS = (sys.platform == 'win32')
if MS_WINDOWS:
import msvcrt
else:
from asyncio import unix_events


if support.check_sanitizer(address=True):
raise unittest.SkipTest("Exposes ASAN flakiness in GitHub CI")

@@ -270,26 +276,43 @@ async def send_signal(proc):
finally:
signal.signal(signal.SIGHUP, old_handler)

def prepare_broken_pipe_test(self):
def test_stdin_broken_pipe(self):
# buffer large enough to feed the whole pipe buffer
large_data = b'x' * support.PIPE_MAX_SIZE

rfd, wfd = os.pipe()
self.addCleanup(os.close, rfd)
self.addCleanup(os.close, wfd)
if MS_WINDOWS:
handle = msvcrt.get_osfhandle(rfd)
os.set_handle_inheritable(handle, True)
code = textwrap.dedent(f'''
import os, msvcrt
handle = {handle}
fd = msvcrt.open_osfhandle(handle, os.O_RDONLY)
os.read(fd, 1)
''')
from subprocess import STARTUPINFO
startupinfo = STARTUPINFO()
startupinfo.lpAttributeList = {"handle_list": [handle]}
kwargs = dict(startupinfo=startupinfo)
else:
code = f'import os; fd = {rfd}; os.read(fd, 1)'
kwargs = dict(pass_fds=(rfd,))

# the program ends before the stdin can be fed
proc = self.loop.run_until_complete(
asyncio.create_subprocess_exec(
sys.executable, '-c', 'pass',
sys.executable, '-c', code,
stdin=subprocess.PIPE,
**kwargs
)
)

return (proc, large_data)

def test_stdin_broken_pipe(self):
proc, large_data = self.prepare_broken_pipe_test()

async def write_stdin(proc, data):
await asyncio.sleep(0.5)
proc.stdin.write(data)
# Only exit the child process once the write buffer is filled
os.write(wfd, b'go')
await proc.stdin.drain()

coro = write_stdin(proc, large_data)
@@ -300,7 +323,16 @@ async def write_stdin(proc, data):
self.loop.run_until_complete(proc.wait())

def test_communicate_ignore_broken_pipe(self):
proc, large_data = self.prepare_broken_pipe_test()
# buffer large enough to feed the whole pipe buffer
large_data = b'x' * support.PIPE_MAX_SIZE

# the program ends before the stdin can be fed
proc = self.loop.run_until_complete(
asyncio.create_subprocess_exec(
sys.executable, '-c', 'pass',
stdin=subprocess.PIPE,
)
)

# communicate() must ignore BrokenPipeError when feeding stdin
self.loop.set_exception_handler(lambda loop, msg: None)
72 changes: 71 additions & 1 deletion Lib/test/test_concurrent_futures/test_deadlock.py
@@ -1,10 +1,13 @@
import contextlib
import queue
import signal
import sys
import time
import unittest
import unittest.mock
from pickle import PicklingError
from concurrent import futures
from concurrent.futures.process import BrokenProcessPool
from concurrent.futures.process import BrokenProcessPool, _ThreadWakeup

from test import support

@@ -241,6 +244,73 @@ def test_crash_big_data(self):

executor.shutdown(wait=True)

def test_gh105829_should_not_deadlock_if_wakeup_pipe_full(self):
# Issue #105829: The _ExecutorManagerThread wakeup pipe could
# fill up and block. See: https://github.com/python/cpython/issues/105829

# Lots of cargo culting while writing this test, apologies if
# something is really stupid...

self.executor.shutdown(wait=True)

if not hasattr(signal, 'alarm'):
raise unittest.SkipTest(
"Tested platform does not support the alarm signal")

def timeout(_signum, _frame):
import faulthandler
faulthandler.dump_traceback()

raise RuntimeError("timed out while submitting jobs?")

thread_run = futures.process._ExecutorManagerThread.run
def mock_run(self):
# Delay thread startup so the wakeup pipe can fill up and block
time.sleep(3)
thread_run(self)

class MockWakeup(_ThreadWakeup):
"""Mock wakeup object to force the wakeup to block"""
def __init__(self):
super().__init__()
self._dummy_queue = queue.Queue(maxsize=1)

def wakeup(self):
self._dummy_queue.put(None, block=True)
super().wakeup()

def clear(self):
try:
while True:
self._dummy_queue.get_nowait()
except queue.Empty:
super().clear()

with (unittest.mock.patch.object(futures.process._ExecutorManagerThread,
'run', mock_run),
unittest.mock.patch('concurrent.futures.process._ThreadWakeup',
MockWakeup)):
with self.executor_type(max_workers=2,
mp_context=self.get_context()) as executor:
self.executor = executor # Allow clean up in fail_on_deadlock

job_num = 100
job_data = range(job_num)

# Need to use sigalarm for timeout detection because
# Executor.submit is not guarded by any timeout (both
# self._work_ids.put(self._queue_count) and
# self._executor_manager_thread_wakeup.wakeup() might
# timeout, maybe more?). In this specific case it was
# the wakeup call that deadlocked on a blocking pipe.
old_handler = signal.signal(signal.SIGALRM, timeout)
try:
signal.alarm(int(self.TIMEOUT))
self.assertEqual(job_num, len(list(executor.map(int, job_data))))
finally:
signal.alarm(0)
signal.signal(signal.SIGALRM, old_handler)


create_executor_tests(globals(), ExecutorDeadlockTest,
executor_mixins=(ProcessPoolForkMixin,